Results 1 - 10 of 73
Optimal Online Deterministic Algorithms and Adaptive Heuristics for Energy and Performance Efficient Dynamic Consolidation of Virtual Machines in Cloud Data Centers
"... The rapid growth in demand for computational power driven by modern service applications combined with the shift to the Cloud computing model have led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy resulting in high opera ..."
Abstract
-
Cited by 51 (5 self)
- Add to MetaCart
(Show Context)
The rapid growth in demand for computational power driven by modern service applications, combined with the shift to the Cloud computing model, has led to the establishment of large-scale virtualized data centers. Such data centers consume enormous amounts of electrical energy, resulting in high operating costs and carbon dioxide emissions. Dynamic consolidation of virtual machines (VMs) using live migration and switching idle nodes to the sleep mode allow Cloud providers to optimize resource usage and reduce energy consumption. However, the obligation of providing high quality of service to customers necessitates dealing with the energy-performance trade-off, as aggressive consolidation may lead to performance degradation. Due to the variability of workloads experienced by modern applications, VM placement should be optimized continuously in an online manner. To understand the implications of the online nature of the problem, we conduct competitive analysis and prove competitive ratios of optimal online deterministic algorithms for the single VM migration and dynamic VM consolidation problems. Furthermore, we propose novel adaptive heuristics for dynamic consolidation of VMs based on an analysis of historical data on the resource usage of VMs. The proposed algorithms significantly reduce energy consumption, while ensuring a high level of adherence to the Service Level Agreements (SLA). We validate the high efficiency of the proposed algorithms by extensive simulations using real-world workload traces from more than a thousand PlanetLab VMs.
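The abstract does not spell out the adaptive heuristics themselves; as a minimal illustration of the kind of history-based overload check it alludes to, the sketch below flags an overloaded host using a median-absolute-deviation threshold over its recent CPU utilization. The function name, the safety parameter, and the choice of MAD are assumptions made for this sketch, not the paper's definitions.

    import statistics

    def host_is_overloaded(cpu_history, safety=2.5):
        """Adaptive overload check: the utilization threshold tightens when the
        host's recent CPU history is volatile (large MAD) and relaxes when it
        is stable. `safety` is an illustrative tuning knob."""
        if len(cpu_history) < 10:
            # Too little history: fall back to a static 90% threshold.
            return cpu_history[-1] > 0.9
        med = statistics.median(cpu_history)
        mad = statistics.median(abs(u - med) for u in cpu_history)
        threshold = 1.0 - safety * mad  # adaptive upper utilization threshold
        return cpu_history[-1] > threshold

    # Example: a volatile history lowers the threshold, so 94% utilization is flagged.
    print(host_is_overloaded([0.5, 0.9, 0.4, 0.95, 0.5, 0.9, 0.45, 0.92, 0.5, 0.93, 0.94]))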
SRCMap: Energy Proportional Storage using Dynamic Consolidation
"... We investigate the problem of creating an energy proportional storage system through power-aware dynamic storage consolidation. Our proposal, Sample-Replicate-Consolidate Mapping (SRCMap), is a storage virtualization layer optimization that enables energy proportionality for dynamic I/O workloads by ..."
Abstract
-
Cited by 41 (9 self)
- Add to MetaCart
(Show Context)
We investigate the problem of creating an energy proportional storage system through power-aware dynamic storage consolidation. Our proposal, Sample-Replicate-Consolidate Mapping (SRCMap), is a storage virtualization layer optimization that enables energy proportionality for dynamic I/O workloads by consolidating the cumulative workload on a subset of physical volumes proportional to the I/O workload intensity. Instead of migrating data across physical volumes dynamically or replicating entire volumes, both of which are prohibitively expensive, SRCMap samples a subset of blocks from each data volume that constitutes its working set and replicates these on other physical volumes. During a given consolidation interval, SRCMap activates a minimal set of physical volumes to serve the workload and spins down the remaining volumes, redirecting their workload to replicas on active volumes. We present both theoretical and experimental evidence to establish the effectiveness of SRCMap in minimizing the power consumption of enterprise storage systems.
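SRCMap's actual consolidation logic is more involved; the sketch below only illustrates the volume-activation idea from the abstract: greedily keep the fewest volumes whose combined throughput covers the current aggregate I/O load, so the rest can be spun down and served from replicas. The IOPS-based capacity model and the function name are assumptions.

    def choose_active_volumes(volume_capacity_iops, aggregate_load_iops):
        """Greedily pick a minimal set of volumes whose combined IOPS capacity
        covers the aggregate workload; the remaining volumes can be spun down
        and their requests redirected to replicas on the active set."""
        active = []
        remaining = aggregate_load_iops
        # Prefer the highest-capacity volumes so the active set stays small.
        for vol, capacity in sorted(volume_capacity_iops.items(),
                                    key=lambda kv: kv[1], reverse=True):
            if remaining <= 0:
                break
            active.append(vol)
            remaining -= capacity
        return active

    # Example: a light consolidated load fits on a single volume.
    print(choose_active_volumes({"v0": 500, "v1": 300, "v2": 200}, 450))  # ['v0']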
AutoScale: Dynamic, Robust Capacity Management for Multi-Tier Data Centers
2012
"... Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly much of this power is wasted. Servers are only busy 10–30 % of the time on average, but they are often left on, while idle, utilizing 60 % or more of peak power when in the idle state. We introduce a dynamic ..."
Abstract
-
Cited by 35 (3 self)
- Add to MetaCart
Energy costs for data centers continue to rise, already exceeding $15 billion yearly. Sadly, much of this power is wasted. Servers are only busy 10–30% of the time on average, but they are often left on while idle, drawing 60% or more of peak power in the idle state. We introduce a dynamic capacity management policy, AutoScale, that greatly reduces the number of servers needed in data centers driven by unpredictable, time-varying load, while meeting response time SLAs. AutoScale scales the data center capacity, adding or removing servers as needed. AutoScale has two key features: (i) it autonomically maintains just the right amount of spare capacity to handle bursts in the request rate; and (ii) it is robust not just to changes in the request rate of real-world traces, but also to changes in request size and server efficiency. We evaluate our dynamic capacity management approach via implementation on a 38-server multi-tier data center, serving a web site of the type seen in Facebook or Amazon, with a key-value store workload. We demonstrate that AutoScale vastly improves upon existing dynamic capacity management policies with respect to meeting SLAs and robustness.
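AutoScale's policy is considerably richer (for example, it also decides how long to wait before switching servers off); the sketch below only illustrates feature (i), keeping a fixed fraction of spare capacity above the currently required server count. The 20% headroom and the linear request-rate model are assumptions.

    import math

    def plan_capacity(request_rate, per_server_rate, spare_fraction=0.2):
        """Servers to keep on: enough for the current request rate plus a
        fixed fraction of spare capacity to absorb bursts."""
        needed = request_rate / per_server_rate
        return max(1, math.ceil(needed * (1.0 + spare_fraction)))

    # Example: 900 req/s at 100 req/s per server with 20% headroom -> 11 servers.
    print(plan_capacity(900, 100))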
Data center demand response: Avoiding the coincident peak via workload shifting and local generation
In ACM SIGMETRICS, 2013
"... Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers ’ participation in demand response is becoming increasingly important given their high and inc ..."
Abstract
-
Cited by 32 (3 self)
- Add to MetaCart
(Show Context)
Demand response is a crucial aspect of the future smart grid. It has the potential to provide significant peak demand reduction and to ease the incorporation of renewable energy into the grid. Data centers’ participation in demand response is becoming increasingly important given their high and increasing energy consumption and their flexibility in demand management compared to conventional industrial facilities. In this paper, we study two demand response schemes to reduce a data center’s peak loads and energy expenditure: workload shifting and the use of local power generation. We conduct a detailed characterization study of coincident peak data over two decades from Fort Collins Utilities, Colorado, and then develop two optimization-based algorithms that combine workload scheduling and local power generation to avoid the coincident peak and reduce the energy expenditure. The first algorithm optimizes the expected cost and the second provides the optimal worst-case guarantee. We evaluate these algorithms via trace-based simulations. The results show that using workload shifting in combination with local generation can provide significant cost savings compared to either alone.
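The paper's two algorithms are formal optimizations; the toy sketch below only conveys how workload shifting and local generation combine: push deferrable work into the hours least likely to be the coincident peak and use on-site generation to shave the riskiest hours. The hourly model, the risk cutoff, and all parameter names are assumptions.

    def schedule_against_peak(base_load, deferrable_total, peak_prob, gen_capacity, k=3):
        """Toy scheduler: spread deferrable energy over the k hours least likely
        to be the coincident peak, then shave the riskiest hours with local
        generation (up to gen_capacity per hour). Returns grid draw per hour."""
        n = len(base_load)
        grid = list(base_load)
        safe_hours = sorted(range(n), key=lambda h: peak_prob[h])[:k]
        for h in safe_hours:  # workload shifting
            grid[h] += deferrable_total / k
        risk_cutoff = 0.5 * max(peak_prob)
        for h in range(n):    # local generation as peak shaving
            if peak_prob[h] >= risk_cutoff:
                grid[h] -= min(gen_capacity, grid[h])
        return grid

    # Example: six hours, flexible load pushed away from the likely afternoon peak.
    print(schedule_against_peak([10, 12, 15, 20, 18, 11], 8,
                                [0.01, 0.02, 0.1, 0.6, 0.4, 0.05], 5))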
GreenWare: Greening cloud-scale data centers to maximize the use of renewable energy
Lecture Notes in Computer Science
"... Abstract. To reduce the negative environmental implications (e.g., CO2 emission and global warming) caused by the rapidly increasing energy consumption, many Internet service operators have started taking various initiatives to operate their cloud-scale data centers with renewable energy. Unfortunat ..."
Abstract
-
Cited by 24 (2 self)
- Add to MetaCart
(Show Context)
To reduce the negative environmental implications (e.g., CO2 emission and global warming) caused by rapidly increasing energy consumption, many Internet service operators have started taking various initiatives to operate their cloud-scale data centers with renewable energy. Unfortunately, due to the intermittent nature of renewable energy sources such as wind turbines and solar panels, renewable energy is currently often more expensive than brown energy produced from conventional fossil-based fuel. As a result, utilizing renewable energy may impose considerable pressure on the sometimes stringent operation budgets of Internet service operators. Therefore, two key questions faced by many cloud-service operators are 1) how to dynamically distribute service requests among data centers in different geographical locations, based on the local weather conditions, to maximize the use of renewable energy, and 2) how to do so within their allowed operation budgets. In this paper, we propose GreenWare, a novel middleware system that dynamically dispatches service requests among geographically distributed data centers to maximize the use of renewable energy while respecting the operator's cost budget.
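GreenWare itself formulates request dispatching as a constrained optimization; the sketch below is only a greedy stand-in for the idea stated in the abstract: fill data centers in order of currently available renewable capacity, then spill the remainder to the cheapest brown energy. The data structure and the capacity model are assumptions.

    def dispatch_requests(total_requests, datacenters):
        """Toy dispatcher. `datacenters` maps a site name to a tuple of
        (renewable_capacity, total_capacity, brown_energy_price)."""
        plan, remaining = {}, total_requests
        # First use whatever renewable capacity is available, largest first.
        for name, (green, cap, _) in sorted(datacenters.items(),
                                            key=lambda kv: kv[1][0], reverse=True):
            take = min(remaining, green, cap)
            plan[name] = take
            remaining -= take
        # Spill any leftover load to the cheapest brown energy with spare capacity.
        for name, (green, cap, price) in sorted(datacenters.items(),
                                                key=lambda kv: kv[1][2]):
            if remaining == 0:
                break
            extra = min(remaining, cap - plan[name])
            plan[name] += extra
            remaining -= extra
        return plan

    # Example: 600 requests/hour dispatched across three sites.
    print(dispatch_requests(600, {"oregon": (300, 500, 0.08),
                                  "virginia": (50, 400, 0.06),
                                  "dublin": (120, 300, 0.09)}))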
Managing Overloaded Hosts for Dynamic Consolidation of Virtual Machines in Cloud Data Centers Under Quality of Service Constraints
IEEE Transactions on Parallel and Distributed Systems (TPDS), 2012
"... Abstract—Dynamic consolidation of Virtual Machines (VMs) is an effective way to improve the utilization of resources and energy efficiency in Cloud data centers. Determining when it is best to reallocate VMs from an overloaded host is an aspect of dynamic VM consolidation that directly influences th ..."
Abstract
-
Cited by 24 (2 self)
- Add to MetaCart
(Show Context)
Dynamic consolidation of Virtual Machines (VMs) is an effective way to improve the utilization of resources and energy efficiency in Cloud data centers. Determining when it is best to reallocate VMs from an overloaded host is an aspect of dynamic VM consolidation that directly influences the resource utilization and Quality of Service (QoS) delivered by the system. The influence on the QoS is explained by the fact that server overloads cause resource shortages and performance degradation of applications. Current solutions to the problem of host overload detection are generally heuristic-based, or rely on statistical analysis of historical data. The limitations of these approaches are that they lead to sub-optimal results and do not allow explicit specification of a QoS goal. We propose a novel approach that, for any known stationary workload and a given state configuration, optimally solves the problem of host overload detection by maximizing the mean inter-migration time under the specified QoS goal, based on a Markov chain model. We heuristically adapt the algorithm to handle unknown non-stationary workloads using the Multisize Sliding Window workload estimation technique. Through simulations with real-world workload traces from more than a thousand PlanetLab VMs, we show that our approach outperforms the best benchmark algorithm and provides approximately 88% of the performance of the optimal offline algorithm.
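The Multisize Sliding Window technique is defined in the paper itself; the sketch below is only a rough approximation of the idea: keep windows of several lengths and report the estimate from the longest window that still agrees with the most recent short one. The window sizes, the tolerance, and the selection rule are assumptions.

    from collections import deque

    class MultisizeSlidingWindow:
        """Rough sketch of multisize sliding-window workload estimation."""

        def __init__(self, sizes=(30, 60, 120), tolerance=0.05):
            self.windows = {s: deque(maxlen=s) for s in sizes}
            self.tolerance = tolerance

        def add(self, utilization):
            for w in self.windows.values():
                w.append(utilization)

        def estimate(self):
            sizes = sorted(self.windows)
            short = self.windows[sizes[0]]
            recent = sum(short) / max(len(short), 1)
            best = recent
            for s in sizes[1:]:
                w = self.windows[s]
                mean = sum(w) / max(len(w), 1)
                if abs(mean - recent) <= self.tolerance:  # still looks stationary
                    best = mean                            # prefer the longer window
            return best

    # Example: after a workload shift, the short window dominates the estimate.
    msw = MultisizeSlidingWindow()
    for t in range(200):
        msw.add(0.3 if t < 150 else 0.8)
    print(round(msw.estimate(), 2))  # ~0.8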
PAC: Pattern-driven Application Consolidation for Efficient Cloud Computing
In Proc. of MASCOTS, 2010
"... Abstract—To reduce cloud system resource cost, application consolidation is a must. In this paper, we present a novel patterndriven application consolidation (PAC) system to achieve efficient resource sharing in virtualized cloud computing infrastructures. PAC employs signal processing techniques to ..."
Abstract
-
Cited by 22 (5 self)
- Add to MetaCart
(Show Context)
To reduce cloud system resource cost, application consolidation is a must. In this paper, we present a novel pattern-driven application consolidation (PAC) system to achieve efficient resource sharing in virtualized cloud computing infrastructures. PAC employs signal processing techniques to dynamically discover significant patterns, called signatures, of different applications and hosts. PAC then performs dynamic application consolidation based on the extracted signatures. We have implemented a prototype of the PAC system on top of the Xen virtual machine platform and tested it on the NCSU Virtual Computing Lab. We have tested our system using RUBiS benchmarks, Hadoop data processing systems, and the IBM System S stream processing system. Our experiments show that 1) PAC can efficiently discover repeating resource usage patterns in the tested applications; 2) signatures can reduce resource prediction errors by 50-90% compared to traditional coarse-grained schemes; and 3) PAC can improve application performance by up to 50% when running a large number of applications on a shared cluster.
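PAC's signature extraction goes well beyond a single transform, but the signal-processing flavor can be illustrated with a dominant-period search over a resource-usage trace via an FFT. The function, the sampling interval, and the synthetic trace below are assumptions for illustration.

    import numpy as np

    def dominant_period(usage, sample_interval_s=60):
        """Return the dominant repeating period (in seconds) of a resource-usage
        trace, found as the strongest non-DC peak of the FFT magnitude."""
        x = np.asarray(usage, dtype=float)
        x = x - x.mean()                      # drop the DC component
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=sample_interval_s)
        k = int(np.argmax(spectrum[1:]) + 1)  # skip the zero-frequency bin
        return 1.0 / freqs[k]

    # Example: a synthetic trace with a one-hour cycle sampled every minute.
    trace = [np.sin(2 * np.pi * t / 60) + 0.1 for t in range(600)]
    print(round(dominant_period(trace)))      # ~3600 seconds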
Dynamic energy-aware capacity provisioning for cloud computing environments
In Proc. IEEE/ACM Int. Conf. Autonomic Computing (ICAC), 2012
"... Data centers have recently gained significant popularity as a cost-effective platform for hosting large-scale service appli-cations. While large data centers enjoy economies of scale by amortizing initial capital investment over large number of machines, they also incur tremendous energy cost in ter ..."
Abstract
-
Cited by 22 (2 self)
- Add to MetaCart
(Show Context)
Data centers have recently gained significant popularity as a cost-effective platform for hosting large-scale service applications. While large data centers enjoy economies of scale by amortizing initial capital investment over a large number of machines, they also incur tremendous energy cost in terms of power distribution and cooling. An effective approach for saving energy in data centers is to dynamically adjust the data center capacity by turning off unused machines. However, this dynamic capacity provisioning problem is known to be challenging, as it requires a careful understanding of the resource demand characteristics as well as consideration of various cost factors, including task scheduling delay, machine reconfiguration cost, and electricity price fluctuation. In this paper, we provide a control-theoretic solution to the dynamic capacity provisioning problem that minimizes the total energy cost while meeting the performance objective in terms of task scheduling delay. Specifically, we model this problem as a constrained discrete-time optimal control problem, and use Model Predictive Control (MPC) to find the optimal control policy. Through extensive analysis and simulation using real workload traces from Google’s compute clusters, we show that our proposed framework can achieve significant reduction in energy cost, while maintaining an acceptable average scheduling delay for individual tasks.
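The paper's controller solves a constrained discrete-time optimal control problem; the sketch below is only a brute-force caricature of one receding-horizon step, trading off energy cost, an SLA-style delay penalty, and a machine switching cost over a short demand forecast. All cost weights and names are assumptions.

    def mpc_step(current_machines, demand_forecast, per_machine_capacity,
                 energy_cost=1.0, delay_penalty=10.0, switch_cost=2.0,
                 max_machines=100):
        """Evaluate a constant machine count over the forecast horizon and
        return the cheapest first action (a real MPC would optimize a full
        trajectory and re-solve at every step)."""
        best_m, best_cost = current_machines, float("inf")
        for m in range(1, max_machines + 1):
            cost = switch_cost * abs(m - current_machines)
            for demand in demand_forecast:
                overload = max(0.0, demand - m * per_machine_capacity)
                cost += energy_cost * m + delay_penalty * overload
            if cost < best_cost:
                best_m, best_cost = m, cost
        return best_m

    # Example: 12 machines on, demand rising over the next three intervals.
    print(mpc_step(12, [900, 1100, 1300], per_machine_capacity=100))  # -> 13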
Minimizing Data Center SLA Violations and Power Consumption via Hybrid Resource Provisioning
In Proc. of the 2nd Int. Green Computing Conf. (IGCC’10), 2011
"... ..."
(Show Context)
WattApp: An Application Aware Power Meter for Shared Data Centers
In ICAC’10
"... The increasing heterogeneity between applications in emerging vir-tualized data centers like clouds introduce significant challenges in estimating the power drawn by the data center. In this work, we present WattApp: an application-aware power meter for shared data centers that addresses this challe ..."
Abstract
-
Cited by 21 (3 self)
- Add to MetaCart
The increasing heterogeneity between applications in emerging virtualized data centers like clouds introduces significant challenges in estimating the power drawn by the data center. In this work, we present WattApp: an application-aware power meter for shared data centers that addresses this challenge. In order to deal with heterogeneous applications, WattApp introduces application parameters (e.g., throughput) in the power modeling framework. WattApp is based on a carefully designed set of experiments on a mix of diverse applications: power benchmarks, web-transaction workloads, HPC workloads, and I/O-intensive workloads. Given a set of N applications and M server types, WattApp runs in O(N) time, uses O(N × M) calibration runs, and predicts the power drawn by any arbitrary placement within 5% of the real power for the applications studied.
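The abstract does not give WattApp's model form; as a minimal sketch of the calibration idea, a per-application, per-server-type power model can be fitted from a few runs with a simple linear fit of power against throughput. The linear form, the function names, and the numbers below are assumptions for illustration.

    def fit_power_model(throughputs, measured_watts):
        """Fit power ~ idle_watts + watts_per_request * throughput for one
        application on one server type (closed-form ordinary least squares)."""
        n = len(throughputs)
        mean_x = sum(throughputs) / n
        mean_y = sum(measured_watts) / n
        cov = sum((x - mean_x) * (y - mean_y)
                  for x, y in zip(throughputs, measured_watts))
        var = sum((x - mean_x) ** 2 for x in throughputs)
        slope = cov / var
        return mean_y - slope * mean_x, slope   # (idle_watts, watts_per_request)

    # Example: three calibration runs, then predict power at a new throughput.
    idle, slope = fit_power_model([100, 300, 500], [180, 220, 260])
    print(round(idle + slope * 400))            # ~240 W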