Results 1 - 5 of 5
"Patterns in the Chaos - a Study of Performance Variation and Predictability in Public IaaS Clouds," in Proc. of WWW, 2015
Abstract - Cited by 1 (0 self)
Benchmarking the performance of public cloud providers is a common research topic. Previous research has already extensively evaluated the performance of different cloud platforms for different use cases, and under different constraints and experiment setups. In this paper, we present a principled, large-scale literature review to collect and codify existing research regarding the predictability of performance in public Infrastructure-as-a-Service (IaaS) clouds. We formulate 15 hypotheses relating to the nature of performance variations in IaaS systems, to the factors of influence of performance variations, and how to compare different instance types. In a second step, we conduct extensive real-life experimentation on Amazon EC2 and Google Compute Engine to empirically validate those hypotheses. At the time of our research, performance in EC2 was substantially less predictable than in GCE. Further, we show that hardware heterogeneity is in practice less prevalent than anticipated by earlier research, while multi-tenancy has a dramatic impact on performance and predictability.
Multi-Resource Fair Allocation in Heterogeneous Cloud Computing Systems
Abstract - Cited by 1 (0 self)
Abstract—We study the multi-resource allocation problem in cloud computing systems where the resource pool is constructed from a large number of heterogeneous servers, representing different points in the configuration space of resources such as processing, memory, and storage. We design a multi-resource allocation mechanism, called DRFH, that generalizes the notion of Dominant Resource Fairness (DRF) from a single server to multiple heterogeneous servers. DRFH provides a number of highly desirable properties. With DRFH, no user prefers the allocation of another user; no one can improve its allocation without decreasing that of the others; and more importantly, no coalition behavior of misreporting resource demands can benefit all its members. DRFH also ensures some level of service isolation among the users. As a direct application, we design a simple heuristic that implements DRFH in real-world systems. Large-scale simulations driven by Google cluster traces show that DRFH significantly outperforms the traditional slot-based scheduler, leading to much higher resource utilization with substantially shorter job completion times. Index Terms—Cloud computing, heterogeneous servers, job scheduling, multi-resource allocation, fairness.
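The single-server DRF notion that DRFH generalizes can be illustrated with a short sketch. The following is a hypothetical, minimal implementation of progressive-filling DRF on one server (the capacities and demands are the classic two-user example, not from this paper, and the function names are invented for illustration); DRFH extends the same dominant-share idea across many heterogeneous servers.

```python
# Minimal sketch of single-server Dominant Resource Fairness (DRF).
# Each user's dominant share is the largest of its per-resource shares;
# the scheduler repeatedly grants one task to the user with the
# smallest dominant share until no further task fits.

def drf_allocate(capacity, demands, rounds):
    """capacity: {resource: total}; demands: {user: {resource: per-task}}."""
    used = {r: 0.0 for r in capacity}
    alloc = {u: 0 for u in demands}  # tasks granted per user

    def dominant_share(user):
        d = demands[user]
        return max(alloc[user] * d[r] / capacity[r] for r in capacity)

    for _ in range(rounds):
        # Pick the user currently holding the smallest dominant share.
        user = min(demands, key=dominant_share)
        d = demands[user]
        if any(used[r] + d[r] > capacity[r] for r in capacity):
            break  # no room for another task of this user
        for r in capacity:
            used[r] += d[r]
        alloc[user] += 1
    return alloc

# Classic example: 9 CPUs and 18 GB memory; user A's tasks need
# <1 CPU, 4 GB>, user B's tasks need <3 CPUs, 1 GB>.
capacity = {"cpu": 9, "mem": 18}
demands = {"A": {"cpu": 1, "mem": 4}, "B": {"cpu": 3, "mem": 1}}
print(drf_allocate(capacity, demands, rounds=10))  # → {'A': 3, 'B': 2}
```

With this allocation both users end up with an equalized dominant share (A's memory share and B's CPU share are both 2/3), which is the property the abstract's fairness guarantees build on.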
Heterogeneity and Interference-Aware Virtual Machine Provisioning for Predictable Performance in the Cloud
Abstract
Abstract—Infrastructure-as-a-Service (IaaS) cloud providers offer tenants elastic computing resources in the form of virtual machine (VM) instances to run their jobs. Recently, providing predictable performance (i.e., performance guarantees) for tenant applications has become increasingly compelling in IaaS clouds. However, hardware heterogeneity and performance interference across VM instances of the same type can bring substantial performance variation to tenant applications, which inevitably deters tenants from moving their performance-sensitive applications to the IaaS cloud. To tackle this issue, this paper proposes Heifer, a Heterogeneity and interference-aware VM provisioning framework for tenant applications, focusing on MapReduce as a representative cloud application. It predicts the performance of MapReduce applications by designing a lightweight performance model using online-measured resource utilization and capturing VM interference. Based on such a performance model, Heifer provisions the VM instances of the best-performing hardware type (i.e., the hardware that achieves the best application performance) to achieve predictable performance for tenant applications, by explicitly exploiting hardware heterogeneity and capturing VM interference. With extensive prototype experiments in our local private cloud and a real-world public cloud (i.e., Microsoft Azure), as well as complementary large-scale simulations, we demonstrate that Heifer can guarantee job performance while saving the job budget for tenants. Moreover, our evaluation results show that Heifer can improve the job throughput of cloud datacenters, such that the revenue of cloud providers is increased, thereby achieving a win-win situation between providers and tenants. Index Terms—Cloud computing, hardware heterogeneity, performance interference, predictable performance, VM provisioning.
Demystifying the Clouds: Harnessing Resource Utilization Models for Cost Effective Infrastructure Alternatives, IEEE Transactions on Cloud Computing, TCCSI-2014-10-0471
Abstract
Abstract—Deployment of Service Oriented Applications (SOAs) to public infrastructure-as-a-service (IaaS) clouds presents challenges to system analysts. Public clouds offer an increasing array of virtual machine types with qualitatively defined CPU, disk, and network I/O capabilities. Determining cost-effective application deployments requires selecting both the quantity and type of virtual machine (VM) resources for hosting SOA workloads of interest. Hosting decisions must utilize sufficient infrastructure to meet service level objectives and cope with service demand. To support these decisions, analysts must: (1) understand how their SOA behaves in the cloud; (2) quantify representative workload(s) for execution; and (3) support service level objectives regardless of the performance limits of the hosting infrastructure. In this paper, we introduce a workload cost prediction methodology which harnesses operating system time accounting principles to support equivalent SOA workload performance using alternate virtual machine (VM) types. We demonstrate how the use of resource utilization checkpointing supports capturing the total resource utilization profile for SOA workloads executed across a pool of VMs. Given these workload profiles, we develop and evaluate our cost prediction methodology using six SOAs. We demonstrate how our methodology can support finding alternate infrastructures that afford lower hosting costs while offering equal or better performance using any VM type on Amazon’s public elastic compute cloud.
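The "resource utilization checkpointing" idea built on operating system time accounting can be sketched as follows. This is a hypothetical illustration using the standard Linux /proc/stat counters, not the paper's actual tooling; the function names and sample snapshots are invented for the example.

```python
# Hypothetical sketch of resource-utilization checkpointing via Linux
# time accounting: snapshot the cumulative per-category CPU jiffies in
# /proc/stat before and after a workload, then report the delta. The
# delta is the workload's total utilization profile for that interval.

def read_cpu_times(stat_text):
    """Parse the aggregate 'cpu' line of /proc/stat into named counters."""
    fields = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]
    for line in stat_text.splitlines():
        if line.startswith("cpu "):
            values = [int(v) for v in line.split()[1:1 + len(fields)]]
            return dict(zip(fields, values))
    raise ValueError("no aggregate cpu line found")

def checkpoint_delta(before, after):
    """Jiffies consumed per accounting category between two checkpoints."""
    return {k: after[k] - before[k] for k in before}

# Example with fabricated /proc/stat snapshots taken around a workload:
t0 = "cpu  100 0 50 800 10 0 5\n"
t1 = "cpu  160 0 70 900 14 0 6\n"
print(checkpoint_delta(read_cpu_times(t0), read_cpu_times(t1)))
```

Comparing such deltas for the same workload run on different VM types is one plausible way to reason about equivalent-performance, lower-cost alternatives, which is the spirit of the methodology described above.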
The Great Expectations of Smartphone Traffic Scheduling
Abstract
Abstract—Utilizing network traffic scheduling to improve the energy efficiency of smartphones has been studied extensively in the past few years. These studies usually take certain approaches and make assumptions concerning traffic predictability, regardless of whether these assumptions hold or whether the approaches have been studied before. In this paper, we conduct an analysis of existing work to find common approaches and assumptions among the proposed solutions. We find the following: 1. A large part of the solutions target a specific (single) application or category of applications, and do not schedule the whole traffic transmitted on the smartphone. 2. A common assumption is that network traffic for smartphones is predictable. The focus of our work is to test these assumptions against real-world data and analyze whether the approaches presented in the literature are feasible. By leveraging two data sets from NetSense, we make several major contributions: 1. We demonstrate clearly, based on a large dataset, that background apps are the largest energy consumers for smartphones. 2. Although some traffic traces exhibit long-term trends, in general traffic from a single app or a user is not predictable in the short term. 3. Achieving energy savings by scheduling traffic from only a specific app is difficult, since multi-app scenarios are so prevalent on today’s smartphones. We also pinpoint future directions for traffic scheduling schemes.