Results 11 - 20 of 23
Cloud Computing for the Power Grid: From Service Composition to Assured Clouds
"... Abstract The electric power industry is one of the few industries where cloud computing has not yet found much adoption, even though electric power utilities rely heavily on communications and computation to plan, operate and analyze power systems. In this paper we explore the reasons for this phen ..."
Abstract
- Add to MetaCart
(Show Context)
The electric power industry is one of the few industries where cloud computing has not yet found much adoption, even though electric power utilities rely heavily on communications and computation to plan, operate and analyze power systems. In this paper we explore the reasons for this phenomenon. We identify a variety of power system applications that could benefit from cloud computing. We then discuss the security requirements of these applications, and explore the design space for providing the security properties through application layer composition and via assured cloud computing. We argue that a combination of these two approaches will be needed to meet diverse application requirements at a cost that can justify the use of cloud computing.
Heterogeneity and Interference-Aware Virtual Machine Provisioning for Predictable Performance in the Cloud
"... Abstract—Infrastructure-as-a-Service (IaaS) cloud providers offer tenants elastic computing resources in the form of virtual machine (VM) instances to run their jobs. Recently, providing predictable performance (i.e., performance guarantee) for tenant applications is becoming increasingly compelling ..."
Abstract
- Add to MetaCart
(Show Context)
Infrastructure-as-a-Service (IaaS) cloud providers offer tenants elastic computing resources in the form of virtual machine (VM) instances to run their jobs. Providing predictable performance (i.e., performance guarantees) for tenant applications is becoming increasingly compelling in IaaS clouds. However, hardware heterogeneity and performance interference across VM instances of the same type can introduce substantial performance variation for tenant applications, which deters tenants from moving their performance-sensitive applications to the IaaS cloud. To tackle this issue, this paper proposes Heifer, a heterogeneity- and interference-aware VM provisioning framework for tenant applications, focusing on MapReduce as a representative cloud application. Heifer predicts the performance of MapReduce applications with a lightweight performance model that uses online-measured resource utilization and captures VM interference. Based on this performance model, Heifer provisions VM instances of the best-performing hardware type (i.e., the hardware that achieves the best application performance) to deliver predictable performance for tenant applications, explicitly exploiting hardware heterogeneity while accounting for VM interference. Through extensive prototype experiments in our local private cloud and a real-world public cloud (Microsoft Azure), as well as complementary large-scale simulations, we demonstrate that Heifer guarantees job performance while saving the job budget for tenants. Moreover, our evaluation results show that Heifer improves the job throughput of cloud datacenters, increasing the revenue of cloud providers and thereby achieving a win-win situation between providers and tenants. Index Terms: Cloud computing, hardware heterogeneity, performance interference, predictable performance, VM provisioning.
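A minimal sketch of the kind of provisioning decision the abstract describes, assuming a toy model in which predicted runtime scales with input size and an online-measured interference factor; the hardware profiles, prices, and the predicted_runtime/provision helpers below are illustrative assumptions, not Heifer's actual model:

```python
# Hypothetical sketch of heterogeneity- and interference-aware VM selection.
# All hardware types, prices, and the linear model are invented for illustration.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class HardwareType:
    name: str
    price_per_hour: float   # $ per VM-hour (assumed)
    base_map_rate: float    # MB/s processed per VM on idle hardware (assumed)

def predicted_runtime(hw: HardwareType, input_mb: float, n_vms: int,
                      interference: float) -> float:
    """Toy performance model: runtime grows with input size and with the
    online-measured interference factor (1.0 = no co-located contention)."""
    effective_rate = hw.base_map_rate * n_vms / interference
    return input_mb / effective_rate          # seconds

def provision(hardware: list[HardwareType], input_mb: float, n_vms: int,
              interference: float, deadline_s: float) -> HardwareType | None:
    """Pick the cheapest hardware type whose predicted runtime meets the deadline."""
    feasible = [hw for hw in hardware
                if predicted_runtime(hw, input_mb, n_vms, interference) <= deadline_s]
    return min(feasible, key=lambda hw: hw.price_per_hour, default=None)

if __name__ == "__main__":
    fleet = [HardwareType("old-xeon", 0.10, 40.0),
             HardwareType("new-xeon", 0.14, 70.0)]
    choice = provision(fleet, input_mb=50_000, n_vms=8, interference=1.3,
                       deadline_s=300)
    print("provision on:", choice.name if choice else "no feasible type")
```

Under these assumptions the framework simply picks the cheapest hardware type whose predicted runtime still meets the deadline; the paper's actual model captures MapReduce behavior and VM interference in more detail.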
Matrix: Achieving Predictable Virtual Machine Performance in the Clouds
"... The success of cloud computing builds largely upon on-demand supply of virtual machines (VMs) that pro-vide the abstraction of a physical machine on shared re-sources. Unfortunately, despite recent advances in virtu-alization technology, there still exists an unpredictable performance gap between th ..."
Abstract
- Add to MetaCart
(Show Context)
The success of cloud computing builds largely upon on-demand supply of virtual machines (VMs) that provide the abstraction of a physical machine on shared resources. Unfortunately, despite recent advances in virtualization technology, there still exists an unpredictable gap between the real and the desired performance. The main contributing factors include contention for the shared physical resources among co-located VMs, limited control of VM allocation, and lack of knowledge about the performance of a specific VM out of the tens of VM types offered by public cloud providers. In this work, we propose Matrix, a novel performance and resource management system that ensures an application achieves its desired performance on a VM. To this end, Matrix utilizes machine learning methods, namely clustering models with probability estimates, to predict the performance of new workloads in a virtualized environment, choose a suitable VM type, and dynamically adjust the resource configuration of a virtual machine on the fly. Evaluations on a private cloud and two public clouds (Rackspace and Amazon EC2) show that, for an extensive set of cloud applications, Matrix is able to estimate application performance with 90% average accuracy. In addition, Matrix can deliver the target performance within 3% variance, and do so with the best cost-efficiency in most cases.
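A small illustrative sketch of the clustering-with-probability-estimates idea mentioned above (not Matrix itself): a new workload's resource fingerprint is soft-assigned to reference clusters, and the cheapest VM type whose probability-weighted performance estimate meets the target is chosen. The centroids, performance table, and prices are invented numbers.

```python
# Illustrative sketch of probabilistic cluster membership driving VM-type choice.
import math

# Resource fingerprints (cpu_util, disk_MBps, net_MBps) of reference cluster centroids (assumed).
CENTROIDS = {"cpu-bound": (0.9, 20.0, 10.0),
             "io-bound":  (0.3, 150.0, 20.0)}

# Normalized performance (1.0 = native speed) per cluster per VM type (assumed).
PERF = {"cpu-bound": {"small": 0.55, "large": 0.95},
        "io-bound":  {"small": 0.80, "large": 0.90}}

PRICE = {"small": 0.05, "large": 0.20}   # $/hour (assumed)

def membership(fingerprint):
    """Soft cluster membership via a softmax over negative Euclidean distances."""
    weights = {c: math.exp(-math.dist(fingerprint, v)) for c, v in CENTROIDS.items()}
    total = sum(weights.values())
    return {c: w / total for c, w in weights.items()}

def expected_perf(fingerprint, vm_type):
    """Probability-weighted performance estimate for one VM type."""
    return sum(p * PERF[c][vm_type] for c, p in membership(fingerprint).items())

def choose_vm(fingerprint, target=0.85):
    """Cheapest VM type predicted to reach the target performance, if any."""
    ok = [t for t in PRICE if expected_perf(fingerprint, t) >= target]
    return min(ok, key=PRICE.get, default=None)

print(choose_vm((0.85, 30.0, 12.0)))   # fingerprint close to the cpu-bound centroid
```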
Demystifying the Clouds: Harnessing Resource Utilization Models for Cost Effective Infrastructure Alternatives
"... Abstract — Deployment of Service Oriented Applications (SOAs) to public infrastructure-as-a-service (IaaS) clouds presents challenges to system analysts. Public clouds offer an increasing array of virtual machine types with qualitatively defined CPU, disk, and network I/O capabilities. Determining c ..."
Abstract
- Add to MetaCart
Deployment of Service Oriented Applications (SOAs) to public infrastructure-as-a-service (IaaS) clouds presents challenges to system analysts. Public clouds offer an increasing array of virtual machine types with qualitatively defined CPU, disk, and network I/O capabilities. Determining cost-effective application deployments requires selecting both the quantity and the type of virtual machine (VM) resources for hosting SOA workloads of interest. Hosting decisions must utilize sufficient infrastructure to meet service level objectives and cope with service demand. To support these decisions, analysts must: (1) understand how their SOA behaves in the cloud; (2) quantify representative workload(s) for execution; and (3) support service level objectives regardless of the performance limits of the hosting infrastructure. In this paper we introduce a workload cost prediction methodology that harnesses operating system time accounting principles to support equivalent SOA workload performance using alternate virtual machine (VM) types. We demonstrate how resource utilization checkpointing supports capturing the total resource utilization profile for SOA workloads executed across a pool of VMs. Given these workload profiles, we develop and evaluate our cost prediction methodology using six SOAs. We demonstrate how our methodology can find alternate infrastructures that afford lower hosting costs while offering equal or better performance using any VM type on Amazon’s public Elastic Compute Cloud.
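A hypothetical sketch of the workflow the abstract describes, under the simplifying assumption that a workload's aggregated CPU-time, disk, and network checkpoints bound its runtime on another VM type by the slowest resource; the VmType capacities, prices, and the predicted_cost helper are made up for illustration and are not the paper's model:

```python
# Sketch: aggregate OS time-accounting checkpoints from a pool of VMs into one
# workload profile, then estimate runtime and cost on alternative VM types.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Checkpoint:
    cpu_seconds: float   # user + kernel CPU time consumed since the last checkpoint
    disk_mb: float       # MB read + written
    net_mb: float        # MB sent + received

@dataclass
class VmType:
    name: str
    price_per_hour: float
    cpu_cores: int
    disk_mbps: float
    net_mbps: float

def total_profile(checkpoints: list[Checkpoint]) -> Checkpoint:
    """Aggregate checkpoints from every VM in the pool into one workload profile."""
    return Checkpoint(sum(c.cpu_seconds for c in checkpoints),
                      sum(c.disk_mb for c in checkpoints),
                      sum(c.net_mb for c in checkpoints))

def predicted_cost(profile: Checkpoint, vm: VmType, n_vms: int) -> tuple[float, float]:
    """Runtime is bounded by the slowest resource; cost = runtime * pool price."""
    hours = max(profile.cpu_seconds / (vm.cpu_cores * n_vms),
                profile.disk_mb / (vm.disk_mbps * n_vms),
                profile.net_mb / (vm.net_mbps * n_vms)) / 3600
    return hours, hours * vm.price_per_hour * n_vms

if __name__ == "__main__":
    profile = total_profile([Checkpoint(7200, 40_000, 9_000),
                             Checkpoint(6800, 38_000, 8_500)])
    for vm in (VmType("m-small", 0.09, 2, 80, 60), VmType("c-large", 0.34, 8, 120, 100)):
        hours, dollars = predicted_cost(profile, vm, n_vms=4)
        print(f"{vm.name}: {hours:.2f} h, ${dollars:.2f}")
```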
The State of Public Infrastructure-as-a-Service Cloud Security
"... The public Infrastructure-as-a-Service (IaaS) cloud industry has reached a critical mass in the past few years, with many cloud service providers fielding competing services. Despite the competition, we find some of the security mechanisms offered by the services to be similar, indicating that the c ..."
Abstract
- Add to MetaCart
(Show Context)
The public Infrastructure-as-a-Service (IaaS) cloud industry has reached a critical mass in the past few years, with many cloud service providers fielding competing services. Despite the competition, we find some of the security mechanisms offered by the services to be similar, indicating that the cloud industry has established a number of “best practices,” while other security mechanisms vary widely, indicating that there is also still room for innovation and experimentation. We investigate these differences and the possible underlying reasons for them. We also contrast the security mechanisms offered by public IaaS cloud offerings with the security mechanisms proposed by academia over the same period. Finally, we speculate on how industry and academia might work together to solve the pressing security problems in public IaaS clouds going forward. CCS Concepts: Security and privacy → Virtualization and security; Distributed systems security; Networks → Cloud computing
A Placement Vulnerability Study in Multi-Tenant Public Clouds
"... Public infrastructure-as-a-service clouds, such as Amazon EC2, Google Compute Engine (GCE) and Microsoft Azure allow clients to run virtual machines (VMs) on shared phys-ical infrastructure. This practice of multi-tenancy brings economies of scale, but also introduces the risk of sharing a physical ..."
Abstract
- Add to MetaCart
(Show Context)
Public infrastructure-as-a-service clouds, such as Amazon EC2, Google Compute Engine (GCE) and Microsoft Azure, allow clients to run virtual machines (VMs) on shared physical infrastructure. This practice of multi-tenancy brings economies of scale, but also introduces the risk of sharing a physical server with an arbitrary and potentially malicious VM. Past works have demonstrated how to place a VM alongside a target victim (co-location) in early-generation clouds and how to extract secret information via side-channels. Although there have been numerous works on side-channel attacks, there have been no studies on placement vulnerabilities in public clouds since the adoption of stronger isolation technologies such as Virtual Private Clouds (VPCs). We investigate this problem of placement vulnerabilities and quantitatively evaluate three popular public clouds for their susceptibility to co-location attacks. We find that adoption of new technologies (e.g., VPC) makes many prior attacks, such as cloud cartography, ineffective. We find new ways to reliably test for co-location across Amazon EC2, Google GCE, and Microsoft Azure. We also find ways to detect co-location with victim web servers in a multi-tiered cloud application located behind a load balancer. We use our new co-residence tests and multiple customer accounts to launch VM instances under different strategies that seek to maximize the likelihood of co-residency. We find that it is much easier (10× higher success rate) and cheaper (up to $114 less) to achieve co-location in these three clouds when compared to a secure reference placement policy.
Impact of Instance Seeking Strategies on Resource Allocation in Cloud Data Centers
"... Abstract—With the prosperity of cloud computing, an in- ..."