Results 1 - 10 of 23
Next Stop, the Cloud: Understanding Modern Web Service Deployment in EC2 and Azure
"... An increasingly large fraction of Internet services are hosted on a cloud computing system such as Amazon EC2 or Windows Azure. But to date, no in-depth studies about cloud usage by Internet services has been performed. We provide a detailed measurement study to shed light on how modern web service ..."
Abstract
-
Cited by 10 (3 self)
- Add to MetaCart
(Show Context)
An increasingly large fraction of Internet services are hosted on a cloud computing system such as Amazon EC2 or Windows Azure, but to date no in-depth studies of cloud usage by Internet services have been performed. We provide a detailed measurement study to shed light on how modern web service deployments use the cloud and to identify ways in which cloud-using services might improve these deployments. Our results show that 4% of the Alexa top million use EC2/Azure; that there are several common deployment patterns for cloud-using web service front ends; and that services can significantly improve their wide-area performance and failure tolerance by making better use of the existing regional diversity in EC2. Driving these analyses are several new datasets, including one with over 34 million DNS records for Alexa websites and a packet capture from a large university network.
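As a rough illustration of the kind of front-end classification such a study performs, the sketch below resolves a hostname and checks whether any returned address falls inside a provider address range. The CIDR blocks, helper names, and example hostnames are placeholders, not the paper's datasets or the providers' published ranges.

```python
# Sketch: classify a website as EC2/Azure-hosted by checking whether its
# resolved IP addresses fall inside known provider address ranges.
# The CIDR blocks below are illustrative placeholders, not authoritative lists.
import socket
from ipaddress import ip_address, ip_network

PROVIDER_RANGES = {
    "EC2":   [ip_network("203.0.113.0/24")],   # placeholder CIDR
    "Azure": [ip_network("198.51.100.0/24")],  # placeholder CIDR
}

def classify(hostname):
    """Return the set of providers whose ranges contain any A record of hostname."""
    try:
        _, _, addrs = socket.gethostbyname_ex(hostname)
    except socket.gaierror:
        return set()
    hits = set()
    for addr in addrs:
        ip = ip_address(addr)
        for provider, ranges in PROVIDER_RANGES.items():
            if any(ip in net for net in ranges):
                hits.add(provider)
    return hits

if __name__ == "__main__":
    for site in ["example.com", "example.org"]:  # stand-ins for an Alexa-style site list
        print(site, classify(site) or "not matched")
```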
Scheduler-based Defenses against Cross-VM Side-channels, 2014. Full version available from the authors’ web pages
"... Public infrastructure-as-a-service clouds, such as Ama-zon EC2 and Microsoft Azure allow arbitrary clients to run virtual machines (VMs) on shared physical in-frastructure. This practice of multi-tenancy brings economies of scale, but also introduces the threat of malicious VMs abusing the schedulin ..."
Abstract
-
Cited by 6 (1 self)
- Add to MetaCart
(Show Context)
Public infrastructure-as-a-service clouds, such as Amazon EC2 and Microsoft Azure, allow arbitrary clients to run virtual machines (VMs) on shared physical infrastructure. This practice of multi-tenancy brings economies of scale, but also introduces the threat of malicious VMs abusing the scheduling of shared resources. Recent works have shown how to mount cross-VM side-channel attacks to steal cryptographic secrets. The straightforward solution is hard isolation, which dedicates hardware to each VM, but this comes at the cost of reduced efficiency. We instead investigate the principle of soft isolation: reducing the risk of sharing through better scheduling. With experimental measurements, we show that a minimum run time (MRT) guarantee for VM virtual CPUs, which limits the frequency of preemptions, can effectively prevent existing Prime+Probe cache-based side-channel attacks. We also find that the performance impact of MRT guarantees can be very low, particularly in multi-core settings. Finally, we integrate a simple per-core CPU state cleansing mechanism, a form of hard isolation, into Xen. It provides further protection against side-channel attacks at little cost when used in conjunction with an MRT guarantee.
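A toy sketch of the soft-isolation policy described above, not Xen's scheduler: a waking vCPU may preempt only once the currently running vCPU has held the core for at least the MRT. The class, method names, and the 5 ms value are illustrative assumptions.

```python
# Toy model of a minimum run time (MRT) guarantee: an interrupting vCPU may
# preempt the current one only after the current vCPU has run for MRT_MS.
# This illustrates the scheduling policy only; it is not Xen's implementation.

MRT_MS = 5.0  # illustrative minimum run time in milliseconds

class Core:
    def __init__(self):
        self.running_vcpu = None
        self.run_start_ms = 0.0

    def request_preemption(self, now_ms, waking_vcpu):
        """Return True if waking_vcpu is allowed to preempt right now."""
        if self.running_vcpu is None:
            return True
        elapsed = now_ms - self.run_start_ms
        # Frequent preemptions are what make Prime+Probe probes fine-grained;
        # deferring them until MRT_MS has elapsed coarsens the attacker's view.
        return elapsed >= MRT_MS

    def dispatch(self, now_ms, vcpu):
        self.running_vcpu = vcpu
        self.run_start_ms = now_ms

if __name__ == "__main__":
    core = Core()
    core.dispatch(0.0, "victim-vcpu")
    for t in (1.0, 3.0, 6.0):
        print(f"t={t} ms: preempt allowed -> {core.request_preemption(t, 'attacker-vcpu')}")
```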
Is the Same Instance Type Created Equal? Exploiting Heterogeneity of Public Clouds
- IEEE Transactions on Cloud Computing
"... Public cloud platforms might start with homogeneous hardware; nevertheless, because of inevitable hardware upgrades, or adding more capacity, the initial homogeneous platform will gradually evolve into heterogeneous as time passes by. The consequent performance heterogeneity is of concern to cloud ..."
Abstract
-
Cited by 5 (1 self)
- Add to MetaCart
Public cloud platforms may start with homogeneous hardware, but inevitable hardware upgrades and capacity additions mean that an initially homogeneous platform gradually becomes heterogeneous over time. The resulting performance heterogeneity is of concern to cloud users. In this article, we evaluate the performance variation caused by hardware heterogeneity and scheduling mechanisms in public clouds. Amazon Elastic Compute Cloud (Amazon EC2) and Rackspace Cloud are used as representatives because of their relatively long track record and wide usage among small and medium enterprises (SMEs). A comprehensive set of micro-benchmarks and application-level macro-benchmarks is used to investigate performance variation. We make several contributions. First, we find that heterogeneous hardware is common among the longer-running cloud platforms, although the level of heterogeneity varies. Second, we observe that heterogeneous hardware is the primary culprit behind performance variation on cloud platforms. Third, we discover that varying CPU acquisition percentages and different virtual machine scheduling mechanisms exacerbate the performance variation problem, especially for network-related operations. Finally, based on these observations, we propose cost-saving approaches and analyze the Nash equilibrium from the cloud user’s perspective. Using a simple “trial-and-better” approach, i.e., keeping well-performing instances and discarding poorly performing ones, cloud users can achieve up to 30% cost savings.
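A hedged sketch of the “trial-and-better” idea: launch a few extra instances, benchmark them, keep the fast ones, and release the rest. The launch/terminate/benchmark stubs, batch size, and threshold stand in for real provider API calls and micro-benchmarks.

```python
# Sketch of "trial-and-better" provisioning: launch a batch of nominally
# identical instances, benchmark each, keep the fast ones and replace the
# slow ones. launch(), terminate() and benchmark() are stubs standing in
# for provider API calls and a real micro-benchmark.
import random

TARGET = 4          # instances we ultimately want to keep
BATCH = 6           # launch a few extra to choose from
SCORE_CUTOFF = 0.8  # keep instances scoring at least this fraction of the best

def launch():
    return {"id": f"i-{random.randrange(1 << 32):08x}"}

def terminate(instance):
    pass  # provider API call would go here

def benchmark(instance):
    # Stand-in for a CPU/disk/network micro-benchmark; higher is better.
    return random.uniform(0.5, 1.0)

def trial_and_better():
    kept = []
    while len(kept) < TARGET:
        batch = [launch() for _ in range(BATCH - len(kept))]
        scored = sorted(((benchmark(i), i) for i in batch),
                        key=lambda p: p[0], reverse=True)
        best = scored[0][0]
        for score, inst in scored:
            if score >= SCORE_CUTOFF * best and len(kept) < TARGET:
                kept.append((score, inst))
            else:
                terminate(inst)
    return kept

if __name__ == "__main__":
    for score, inst in trial_and_better():
        print(inst["id"], round(score, 3))
```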
C5: Cross-Cores Cache Covert Channel
- In Proceedings of the 12th Conference on Detection of Intrusions and Malware, and Vulnerability Assessment (DIMVA), 2015
"... Abstract. Cloud computing relies on hypervisors to isolate virtual ma-chines running on shared hardware. Since perfect isolation is difficult to achieve, sharing hardware induces threats. Covert channels were demon-strated to violate isolation and, typically, allow data exfiltration. Several covert ..."
Abstract
-
Cited by 3 (2 self)
- Add to MetaCart
(Show Context)
Cloud computing relies on hypervisors to isolate virtual machines running on shared hardware. Since perfect isolation is difficult to achieve, sharing hardware induces threats. Covert channels have been demonstrated to violate isolation and, typically, allow data exfiltration. Several covert channels have been proposed that rely on the processor’s cache. However, these covert channels are either slow or impractical due to addressing uncertainty, which exists in particular in virtualized environments and with recent L3 caches that use complex addressing. Using shared memory would elude addressing uncertainty, but shared memory is not available in most practical setups. We build C5, a covert channel that tackles addressing uncertainty without requiring any shared memory, making the covert channel fast and practical. We are able to transfer messages on modern hardware across any cores of the same processor. The covert channel targets the last-level cache, which is shared across all cores, and exploits the inclusiveness of that cache, which allows one core to evict lines in the private first-level cache of another core. We experimentally evaluate the covert channel in native and virtualized environments; in particular, we successfully establish a covert channel between virtual machines running on different cores. We measure a bitrate of 1,291 bps for a native setup and 751 bps for a virtualized setup, one order of magnitude above previous cache-based covert channels in the same setup.
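Real LLC probing requires native code and careful eviction-set construction, so the toy sketch below only illustrates the receiver's final step of turning probe latencies into bits; the simulated latencies, threshold, and framing are invented for the example and are not the C5 protocol.

```python
# Toy receiver-side decoding for a cache-timing covert channel.
# Real probing of LLC sets needs native code; here probe latencies are
# simulated so only the latency-to-bit thresholding step is illustrated.
import random

THRESHOLD_CYCLES = 150  # illustrative boundary between "cached" and "evicted"

def simulated_probe(bit_sent):
    # When the sender evicts the receiver's lines (bit 1), probes are slow.
    base = 220 if bit_sent else 80
    return [base + random.gauss(0, 10) for _ in range(16)]  # 16 probed lines

def decode_interval(latencies):
    # Bit = 1 if the median probe latency indicates eviction by the sender.
    ordered = sorted(latencies)
    median = ordered[len(ordered) // 2]
    return 1 if median > THRESHOLD_CYCLES else 0

if __name__ == "__main__":
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    received = [decode_interval(simulated_probe(b)) for b in message]
    print("sent    ", message)
    print("received", received)
```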
User-Centric Heterogeneity-Aware MapReduce Job Provisioning in the Public Cloud
"... Abstract Cloud datacenters are becoming increasingly heterogeneous with respect to the hardware on which virtual machine (VM) instances are hosted. As a result, ostensibly identical instances in the cloud show significant performance variability depending on the physical machines that host them. In ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
Cloud datacenters are becoming increasingly heterogeneous with respect to the hardware on which virtual machine (VM) instances are hosted. As a result, ostensibly identical instances in the cloud show significant performance variability depending on the physical machines that host them. In our case study on Amazon’s EC2 public cloud, we observe that the average execution time of Hadoop MapReduce jobs varies by up to 30% despite using identical VM instances for the Hadoop cluster. In this paper, we propose and develop U-CHAMPION, a user-centric middleware that automates job provisioning and configuration of the Hadoop MapReduce framework in a public cloud to improve job performance and reduce the cost of leasing VM instances. It addresses the unique challenges of hardware-heterogeneity-aware job provisioning in the public cloud through a novel selective-instance-reacquisition technique, and it applies a collaborative filtering technique based on UV decomposition for online estimation of ad hoc job execution times. We have implemented U-CHAMPION on Amazon EC2 and compared it with a representative automated MapReduce job provisioning system. Experimental results with the PUMA benchmarks show that U-CHAMPION improves MapReduce job performance and reduces the cost of leasing VM instances by as much as 21%.
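A minimal sketch of the collaborative-filtering idea: factor a sparse (job, hardware-type) runtime matrix into low-rank factors U and V by gradient descent so missing runtimes can be estimated. The matrix values, rank, and hyperparameters are illustrative; this is not U-CHAMPION's implementation.

```python
# Sketch: estimate missing (job, hardware-type) runtimes by UV decomposition,
# i.e. low-rank matrix factorization fit by gradient descent on observed entries.
import numpy as np

rng = np.random.default_rng(0)

# Observed runtimes (seconds); np.nan marks unobserved (job, hardware-type) pairs.
R = np.array([
    [120.0,  95.0, np.nan],
    [200.0, np.nan, 150.0],
    [np.nan, 60.0,  70.0],
])
mask = ~np.isnan(R)
scale = np.nanmean(R)                    # normalize so gradient descent is well behaved
Rn = np.where(mask, R / scale, 0.0)

rank, lr, reg, epochs = 2, 0.01, 0.01, 5000
U = rng.normal(scale=0.1, size=(R.shape[0], rank))
V = rng.normal(scale=0.1, size=(R.shape[1], rank))

for _ in range(epochs):
    err = np.where(mask, Rn - U @ V.T, 0.0)   # error only on observed entries
    U += lr * (err @ V - reg * U)
    V += lr * (err.T @ U - reg * V)

print("estimated runtimes (s):")
print(np.round((U @ V.T) * scale, 1))
```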
ClouDiA: A Deployment Advisor for Public Clouds
"... An increasing number of distributed data-driven applications are moving into shared public clouds. By sharing resources and operating at scale, public clouds promise higher utilization and lower costs than private clusters. To achieve high utilization, however, cloud providers inevitably allocate vi ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
An increasing number of distributed data-driven applications are moving into shared public clouds. By sharing resources and operating at scale, public clouds promise higher utilization and lower costs than private clusters. To achieve high utilization, however, cloud providers inevitably allocate virtual machine instances non-contiguously: instances of a given application may end up on physically distant machines in the cloud. This allocation strategy can lead to large differences in average latency between instances, and for a large class of applications this difference can result in significant performance degradation unless care is taken in how application components are mapped to instances. In this paper, we propose ClouDiA, a general deployment advisor that selects application node deployments minimizing either (i) the largest latency between application nodes or (ii) the longest critical path among all application nodes. ClouDiA employs mixed-integer programming and constraint programming techniques to efficiently search the space of possible mappings of application nodes to instances. Through experiments with synthetic and real applications in Amazon EC2, we show that our techniques yield a 15% to 55% reduction in time-to-solution or service response time, without any need to modify application code.
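For intuition, objective (i) can be brute-forced at toy scale: enumerate subsets of candidate instances and keep the one with the smallest worst-case pairwise latency. The latency matrix below is made up, and ClouDiA itself relies on mixed-integer and constraint programming rather than enumeration.

```python
# Brute-force sketch of the deployment problem: choose which candidate
# instances to keep so that the largest pairwise latency among them is
# minimized. The latency matrix is illustrative only.
from itertools import combinations

# Measured round-trip latencies (ms) between 5 candidate instances (made up).
LAT = [
    [0.0, 0.3, 1.2, 0.9, 2.1],
    [0.3, 0.0, 1.0, 1.4, 1.8],
    [1.2, 1.0, 0.0, 0.4, 0.7],
    [0.9, 1.4, 0.4, 0.0, 1.1],
    [2.1, 1.8, 0.7, 1.1, 0.0],
]
N_NODES = 3  # application nodes to place on the 5 candidate instances

def worst_link(chosen):
    # Largest pairwise latency among the chosen instances.
    return max(LAT[a][b] for a, b in combinations(chosen, 2))

best = min(combinations(range(len(LAT)), N_NODES), key=worst_link)
print("instances to keep:", best, "worst link:", worst_link(best), "ms")
```

Enumeration is exponential in the number of candidates, which is exactly why the paper turns to mixed-integer and constraint programming for realistic deployment sizes.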
Patterns in the Chaos: A Study of Performance Variation and Predictability in Public IaaS Clouds
- In Proc. of WWW, 2015
"... Benchmarking the performance of public cloud providers is a common research topic. Previous research has already ex-tensively evaluated the performance of different cloud plat-forms for different use cases, and under different constraints and experiment setups. In this paper, we present a princi-ple ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Benchmarking the performance of public cloud providers is a common research topic. Previous research has already extensively evaluated the performance of different cloud platforms for different use cases, under different constraints and experiment setups. In this paper, we present a principled, large-scale literature review to collect and codify existing research regarding the predictability of performance in public Infrastructure-as-a-Service (IaaS) clouds. We formulate 15 hypotheses relating to the nature of performance variations in IaaS systems, the factors that influence performance variations, and how to compare different instance types. In a second step, we conduct extensive real-life experimentation on Amazon EC2 and Google Compute Engine to empirically validate those hypotheses. At the time of our research, performance in EC2 was substantially less predictable than in GCE. Further, we show that hardware heterogeneity is in practice less prevalent than anticipated by earlier research, while multi-tenancy has a dramatic impact on performance and predictability.
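One simple way to quantify the predictability being compared is the coefficient of variation of repeated benchmark runs per instance type, as sketched below; the sample values and instance-type labels are placeholders, not the paper's measurements.

```python
# Sketch: compare performance predictability across providers/instance types
# via the coefficient of variation (stddev / mean) of repeated benchmark runs.
# The run times below are placeholders, not measurements from the paper.
from statistics import mean, stdev

runs = {
    "EC2 m3.large":      [41.2, 39.8, 52.5, 47.1, 40.3, 58.9],
    "GCE n1-standard-2": [44.0, 44.6, 43.1, 45.2, 44.8, 43.7],
}

for instance_type, samples in runs.items():
    cv = stdev(samples) / mean(samples)
    print(f"{instance_type:20s} mean={mean(samples):6.1f}s  CV={cv:.2%}")
```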
Multi-Resource Fair Allocation in Heterogeneous Cloud Computing Systems
"... Abstract—We study the multi-resource allocation problem in cloud computing systems where the resource pool is constructed from a large number of heterogeneous servers, representing different points in the configuration space of resources such as processing, memory, and storage. We design a multi-res ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
We study the multi-resource allocation problem in cloud computing systems where the resource pool is constructed from a large number of heterogeneous servers, representing different points in the configuration space of resources such as processing, memory, and storage. We design a multi-resource allocation mechanism, called DRFH, that generalizes the notion of Dominant Resource Fairness (DRF) from a single server to multiple heterogeneous servers. DRFH provides a number of highly desirable properties: no user prefers the allocation of another user; no one can improve its allocation without decreasing that of others; and, more importantly, no coalition can benefit all of its members by misreporting resource demands. DRFH also ensures some level of service isolation among users. As a direct application, we design a simple heuristic that implements DRFH in real-world systems. Large-scale simulations driven by Google cluster traces show that DRFH significantly outperforms the traditional slot-based scheduler, leading to much higher resource utilization with substantially shorter job completion times. Index terms: cloud computing, heterogeneous servers, job scheduling, multi-resource allocation, fairness.
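For reference, the single-server DRF mechanism that DRFH generalizes can be sketched as progressive filling: repeatedly give one more task to the user with the smallest dominant share. The capacities and demands below are made-up toy values in the style of the original DRF example; the heterogeneous-server extension from the paper is not reproduced.

```python
# Sketch of single-server Dominant Resource Fairness (DRF), the mechanism that
# DRFH generalizes to many heterogeneous servers: repeatedly allocate one task
# to the user with the lowest dominant share. Capacities/demands are made up.

CAPACITY = {"cpu": 9.0, "mem": 18.0}        # total server resources
DEMANDS = {                                 # per-task demand of each user
    "A": {"cpu": 1.0, "mem": 4.0},
    "B": {"cpu": 3.0, "mem": 1.0},
}

allocated = {u: {r: 0.0 for r in CAPACITY} for u in DEMANDS}
used = {r: 0.0 for r in CAPACITY}
tasks = {u: 0 for u in DEMANDS}

def dominant_share(user):
    return max(allocated[user][r] / CAPACITY[r] for r in CAPACITY)

while True:
    # Pick the user with the smallest dominant share whose next task still fits.
    for user in sorted(DEMANDS, key=dominant_share):
        demand = DEMANDS[user]
        if all(used[r] + demand[r] <= CAPACITY[r] for r in CAPACITY):
            for r in CAPACITY:
                used[r] += demand[r]
                allocated[user][r] += demand[r]
            tasks[user] += 1
            break
    else:
        break  # no user's next task fits; stop

for user in DEMANDS:
    print(user, "tasks:", tasks[user], "dominant share:", round(dominant_share(user), 2))
```

With these values, the loop ends with both users at the same dominant share, which is the equalization property DRF (and DRFH) aims for.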
Cloud WorkBench – Infrastructure-as-Code Based Cloud Benchmarking
"... To optimally deploy their applications, users of Infrastructure-as-a-Service clouds are required to eval-uate the costs and performance of different combina-tions of cloud configurations to find out which combi-nation provides the best service level for their specific application. Unfortunately, ben ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
To deploy their applications optimally, users of Infrastructure-as-a-Service clouds must evaluate the costs and performance of different combinations of cloud configurations to find out which combination provides the best service level for their specific application. Unfortunately, benchmarking cloud services is cumbersome and error-prone. In this paper, we propose an architecture and a concrete implementation of a cloud benchmarking Web service, which fosters the definition of reusable and representative benchmarks. In contrast to existing work, our system is based on the notion of Infrastructure-as-Code, a state-of-the-art concept for defining IT infrastructure in a reproducible, well-defined, and testable way. We demonstrate our system with an illustrative case study, in which we measure and compare the disk I/O speeds of different instance and storage types in Amazon EC2.
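A minimal sketch of the kind of disk I/O micro-benchmark such a case study schedules on each instance/storage combination: sequentially write a file and report throughput. The file name, block size, and total size are arbitrary, and Cloud WorkBench itself defines benchmarks as Infrastructure-as-Code rather than as ad-hoc scripts like this one.

```python
# Minimal sequential-write micro-benchmark of the sort a cloud benchmarking
# harness might run on each instance/storage combination. Sizes are arbitrary;
# a real run would also measure reads, random I/O, and repeat the measurement.
import os
import time

FILE = "cwb_io_test.bin"
BLOCK = 1024 * 1024          # 1 MiB blocks
TOTAL = 256 * 1024 * 1024    # 256 MiB total

def sequential_write_mbps():
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    with open(FILE, "wb") as f:
        for _ in range(TOTAL // BLOCK):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())     # include flush-to-disk in the measurement
    elapsed = time.perf_counter() - start
    os.remove(FILE)
    return (TOTAL / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    print(f"sequential write: {sequential_write_mbps():.1f} MiB/s")
```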
IS
, 2013
"... performance to improve multi-Cloud architecture on cosmological simulation use case ..."
Abstract
- Add to MetaCart
(Show Context)
performance to improve multi-Cloud architecture on cosmological simulation use case