Results 1 - 10 of 450
PowerNap: Eliminating server idle power
- In International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2009
Abstract - Cited by 219 (4 self)
Data center power consumption is growing to unprecedented levels: the EPA estimates U.S. data centers will consume 100 billion kilowatt hours annually by 2011. Much of this energy is wasted in idle systems: in typical deployments, server utilization is below 30%, but idle servers still consume 60% of their peak power draw. Typical idle periods, though frequent, last seconds or less, confounding simple energy-conservation approaches. In this paper, we propose PowerNap, an energy-conservation approach where the entire system transitions rapidly between a high-performance active state and a near-zero-power idle state in response to instantaneous load. Rather than requiring fine-grained power-performance states and complex load-proportional operation from each system component, PowerNap instead calls for minimizing idle power and transition time, which are simpler optimization goals. Based on the PowerNap concept, we develop requirements and outline mechanisms to eliminate idle power waste in enterprise blade servers. Because PowerNap operates in low-efficiency regions of current blade center power supplies, we introduce the Redundant Array for Inexpensive Load Sharing (RAILS), a power provisioning approach that provides high conversion efficiency across the entire range of PowerNap's power demands. Using utilization traces collected from enterprise-scale commercial deployments, we demonstrate that, together, PowerNap and RAILS reduce average server power consumption by 74%.
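Taken at face value, the abstract's numbers (roughly 30% utilization, idle draw at 60% of peak) make the opportunity easy to quantify. A minimal back-of-the-envelope sketch, assuming a linear utilization-to-power relation and illustrative wattages (neither is the paper's validated model):

```python
# Illustrative energy model: all parameter values and the linear
# interpolation below are assumptions, not measurements from the paper.

def avg_power_conventional(peak_w, utilization, idle_frac_of_peak):
    """Average power when idle servers still draw a large fraction of peak."""
    busy = utilization * peak_w
    idle = (1.0 - utilization) * idle_frac_of_peak * peak_w
    return busy + idle

def avg_power_powernap(peak_w, utilization, nap_w):
    """Average power if idle time is spent in a near-zero-power nap state."""
    return utilization * peak_w + (1.0 - utilization) * nap_w

conventional = avg_power_conventional(peak_w=300, utilization=0.3,
                                      idle_frac_of_peak=0.6)   # 216 W
napping = avg_power_powernap(peak_w=300, utilization=0.3, nap_w=10)  # 97 W
savings = 1.0 - napping / conventional                         # > 50%
```

Even this toy model recovers savings on the order the paper reports; real transitions take time, which is exactly why the paper makes transition latency a first-class optimization target.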
Cutting the Electric Bill for Internet-Scale Systems
Abstract - Cited by 203 (3 self)
Energy expenses are becoming an increasingly important fraction of data center operating costs. At the same time, the energy expense per unit of computation can vary significantly between two different locations. In this paper, we characterize the variation due to fluctuating electricity prices and argue that existing distributed systems should be able to exploit this variation for significant economic gains. Electricity prices exhibit both temporal and geographic variation, due to regional demand differences, transmission inefficiencies, and generation diversity. Starting with historical electricity prices for twenty-nine locations in the US and network traffic data collected on Akamai's CDN, we use simulation to quantify the possible economic gains for a realistic workload. Our results imply that existing systems may be able to save millions of dollars a year in electricity costs by being cognizant of locational computation cost differences.
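The core dispatch idea is simple to sketch: given per-location electricity prices, send load to the cheapest site with spare capacity. The prices, capacities, and single-constraint model below are hypothetical illustrations, not Akamai's actual policy:

```python
# Hypothetical price-aware dispatch: greedily fill demand from the
# cheapest locations first. Numbers are illustrative, not real prices.

def cheapest_feasible(prices, capacity, demand):
    """Assign demand to locations in increasing price order."""
    assignment = {}
    for loc in sorted(prices, key=prices.get):
        take = min(demand, capacity[loc])
        if take > 0:
            assignment[loc] = take
            demand -= take
        if demand == 0:
            break
    return assignment

prices = {"VA": 42.0, "TX": 31.5, "CA": 55.0}   # $/MWh, illustrative
capacity = {"VA": 100, "TX": 60, "CA": 100}     # request units
plan = cheapest_feasible(prices, capacity, demand=120)
# TX (cheapest) fills first, VA absorbs the remainder.
```

The paper's contribution is showing, with real price and traffic traces, how much such locational cost-awareness is actually worth under realistic constraints (bandwidth costs, latency, price volatility), which a greedy sketch like this ignores.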
VirtualPower: Coordinated Power Management in Virtualized Enterprise Systems
- In Proceedings of the International Symposium on Operating System Principles (SOSP), 2007
Abstract - Cited by 161 (12 self)
Power management has become increasingly necessary in large-scale datacenters to address costs and limitations in cooling or power delivery. This paper explores how to integrate power management mechanisms and policies with the virtualization technologies being actively deployed in these environments. The goals of the proposed VirtualPower approach to online power management are (i) to support the isolated and independent operation assumed by guest virtual machines (VMs) running on virtualized platforms and (ii) to make it possible to control and globally coordinate the effects of the diverse power management policies applied by these VMs to virtualized resources. To attain these goals, VirtualPower extends to guest VMs 'soft' versions of the hardware power states for which their policies are designed. The resulting technical challenge is to appropriately map VM-level updates made to soft power states to actual changes in the states or in the allocation of underlying virtualized hardware. An implementation of VirtualPower Management (VPM) for the Xen hypervisor addresses this challenge by providing multiple system-level abstractions, including VPM states, channels, mechanisms, and rules. Experimental evaluations on modern multicore platforms highlight resulting improvements in online power management capabilities, including minimization of power consumption with little or no performance penalties and the ability to throttle power consumption while still meeting application requirements. Finally, coordination of online methods for server consolidation with VPM management techniques in heterogeneous server systems is shown to provide up to 34% improvements in power consumption.
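The soft-state mapping problem can be illustrated with a toy translation layer: each VM requests a soft P-state, and the hypervisor converts those requests into actual CPU-share allocations that fit the real hardware. State names, the share table, and the scaling rule are assumptions for illustration, not Xen's or VPM's actual interface:

```python
# Hypothetical soft-P-state table: fraction of a core each state implies.
SOFT_STATE_SHARE = {"P0": 1.0, "P1": 0.75, "P2": 0.5, "P3": 0.25}

def map_soft_states(requests, total_capacity=1.0):
    """Translate per-VM soft-state requests into normalized CPU allocations,
    scaling down proportionally when requests exceed real capacity."""
    wanted = {vm: SOFT_STATE_SHARE[s] for vm, s in requests.items()}
    total = sum(wanted.values())
    scale = min(1.0, total_capacity / total) if total else 0.0
    return {vm: share * scale for vm, share in wanted.items()}

alloc = map_soft_states({"vm1": "P0", "vm2": "P2"})
# vm1 asks for a full core, vm2 for half; both shrink to fit one core.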
Energy-Aware Server Provisioning and Load Dispatching for Connection-Intensive Internet Services
Abstract - Cited by 158 (12 self)
Energy consumption in hosting Internet services is becoming a pressing issue as these services scale up. Dynamic server provisioning techniques are effective in turning off unnecessary servers to save energy. Such techniques, mostly studied for request-response services, face challenges in the context of connection servers that host a large number of long-lived TCP connections. In this paper, we characterize unique properties, performance, and power models of connection servers, based on a real data trace collected from the deployed Windows Live Messenger. Using the models, we design server provisioning and load dispatching algorithms and study subtle interactions between them. We show that our algorithms can save a significant amount of energy without sacrificing user experience.
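The long-lived-connection twist is that scaling down forcibly migrates or drops live sessions, so provisioning needs headroom and hysteresis. A minimal sketch of that idea, with thresholds that are illustrative assumptions rather than the paper's tuned values:

```python
import math

def servers_needed(connections, per_server, headroom=0.2):
    """Pool size from current connection count, with spare headroom."""
    return max(1, math.ceil(connections * (1 + headroom) / per_server))

def next_pool_size(current, connections, per_server, down_slack=0.3):
    """Scale up eagerly; scale down only past a slack band, so live
    connections are not churned by small load fluctuations."""
    target = servers_needed(connections, per_server)
    if target > current:
        return target
    if target < current * (1 - down_slack):
        return target
    return current

size = next_pool_size(current=10, connections=30000, per_server=5000)
# target is 8, inside the hold band, so the pool stays at 10.
```

The asymmetry (fast up, slow down) is a common provisioning pattern; the paper additionally co-designs the load dispatcher so that draining servers empty out naturally.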
Reducing Network Energy Consumption via Sleeping and Rate-Adaptation
Abstract - Cited by 137 (2 self)
We present the design and evaluation of two forms of power management schemes that reduce the energy consumption of networks. The first is based on putting network components to sleep during idle times, reducing energy consumed in the absence of packets. The second is based on adapting the rate of network operation to the offered workload, reducing the energy consumed when actively processing packets. For real-world traffic workloads and topologies and using power constants drawn from existing network equipment, we show that even simple schemes for sleeping or rate-adaptation can offer substantial savings. For instance, our practical algorithms stand to halve energy consumption for lightly utilized networks (10-20%). We show that these savings approach the maximum achievable by any algorithms using the same power management primitives. Moreover, this energy can be saved without noticeably increasing loss and with a small and controlled increase in latency (<10ms). Finally, we show that both sleeping and rate adaptation are valuable depending (primarily) on the power profile of network equipment and the utilization of the network itself.
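The trade-off between the two primitives can be seen on a toy power model: sleeping pays active power only while busy, while rate adaptation exploits the roughly cubic relationship between operating rate and dynamic power. The constants and the cubic model below are illustrative assumptions, not measurements from real equipment:

```python
def energy_sleeping(utilization, p_active=100.0, p_sleep=5.0):
    """Run at full rate while busy, sleep otherwise (ideal transitions)."""
    return utilization * p_active + (1 - utilization) * p_sleep

def energy_rate_adapt(utilization, p_active=100.0, p_static=30.0):
    """Slow the link to match load: dynamic power assumed ~ rate^3,
    on top of a fixed static floor."""
    dynamic = (p_active - p_static) * utilization ** 3
    return p_static + dynamic

# On this toy model, sleeping wins at light load (static floor dominates),
# while rate adaptation wins at high load (little idle time to sleep in).
light, heavy = 0.15, 0.9
```

This mirrors the abstract's conclusion that which primitive is valuable depends primarily on the equipment's power profile (static floor vs. rate-dependent power) and on utilization.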
Optimal Power Allocation in Server Farms, 2009
Abstract - Cited by 102 (3 self)
Server farms today consume more than 1.5% of the total electricity in the U.S. at a cost of nearly $4.5 billion. Given the rising cost of energy, many industries are now seeking solutions for how to best make use of their available power. An important question which arises in this context is how to distribute available power among servers in a server farm so as to get maximum performance. By giving more power to a server, one can get higher server frequency (speed). Hence it is commonly believed that, for a given power budget, performance can be maximized by operating servers at their highest power levels. However, it is also conceivable that one might prefer to run servers at their lowest power levels, which allows more servers to be turned on for a given power budget. To fully understand the effect of power allocation on performance in a server farm with a fixed power budget, we introduce a queueing theoretic model, which allows us to predict the optimal power allocation in a variety of scenarios. Results are verified via extensive experiments on an IBM BladeCenter. We find that the optimal power allocation varies for different scenarios. In particular, it is not always optimal to run servers at their maximum power levels. There are scenarios where it might be optimal to run servers at their lowest power levels or at some intermediate power levels. Our analysis shows that the optimal power allocation is non-obvious and depends on many factors such as the power-to-frequency relationship in the processors, the arrival rate of jobs, the maximum server frequency, the lowest attainable server frequency and the server farm configuration. Furthermore, our theoretical model allows us to explore more general settings than we can implement, including arbitrarily large server farms and different power-to-frequency curves. Importantly, we show that the optimal power allocation can significantly ...
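The question "few fast servers or many slow ones?" can be posed on a toy model: a budget P, per-server power p, a power-to-frequency curve s(p), and a simple per-server M/M/1 response-time approximation. The linear curve and all constants below are illustrative assumptions, not the paper's validated model; changing the curve or arrival rate can move the optimum, which is the paper's point:

```python
def freq(p, p_min=150.0, s_base=1.0, alpha=0.008):
    """Illustrative linear power-to-frequency curve."""
    return s_base + alpha * (p - p_min)

def mean_response(p, budget, lam, work_per_job=1.0):
    """Split arrivals evenly over k = budget // p servers; approximate each
    server as an M/M/1 queue with service rate freq(p)."""
    k = int(budget // p)
    if k == 0:
        return float("inf")
    mu = freq(p) / work_per_job
    per_server = lam / k
    if per_server >= mu:
        return float("inf")          # unstable allocation
    return 1.0 / (mu - per_server)

def best_power(budget, lam, p_min=150, p_max=250):
    """Search a grid of per-server power levels for the lowest mean delay."""
    candidates = range(p_min, p_max + 1, 10)
    return min(candidates, key=lambda p: mean_response(p, budget, lam))

p_star = best_power(budget=1000, lam=4.0)
```

With these particular constants the steep frequency curve favors fewer, faster servers; a flatter curve or lighter load can favor running more servers at low power, matching the abstract's claim that the optimum is non-obvious.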
Minimizing electricity cost: Optimization of distributed internet data centers in a multi-electricity-market environment
- In Proc. of INFOCOM, 2010
Abstract - Cited by 98 (9 self)
The study of Cyber-Physical Systems (CPS) has been an active area of research, and the Internet Data Center (IDC) is an important emerging Cyber-Physical System. As demand for Internet services has increased drastically in recent years, the power used by IDCs has been skyrocketing. While most existing research focuses on reducing the power consumption of IDCs, the power management problem of minimizing the total electricity cost has been overlooked. This is an important problem faced by service providers, especially in the current multi-electricity market, where the price of electricity may exhibit time and location diversities. Further, for these service providers, guaranteeing quality of service (i.e., service level objectives, or SLOs), such as service delay guarantees to the end users, is of paramount importance. This paper studies the problem of minimizing the total electricity cost in a multi-electricity-market environment while guaranteeing quality of service, geared to the location diversity and time diversity of electricity price. We model the problem as a constrained mixed-integer program and propose an efficient solution method. Extensive evaluations based on real-life electricity price data for multiple IDC locations illustrate the efficiency and efficacy of our approach.
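A tiny brute-force analogue of the formulation makes the structure concrete: pick an integer number of active servers per data center to minimize electricity cost subject to an aggregate delay bound standing in for the SLO. The prices, service rate, and M/M/1-style delay model are illustrative assumptions, not the paper's model:

```python
from itertools import product

def min_cost_plan(prices, lam_total, mu=10.0, max_servers=6, delay_slo=0.5):
    """Enumerate integer server counts per location; keep the cheapest
    plan whose aggregate delay bound meets the SLO."""
    best = None
    for counts in product(range(max_servers + 1), repeat=len(prices)):
        cap = sum(counts) * mu                  # total service capacity
        if cap <= lam_total:
            continue                            # unstable: skip
        delay = 1.0 / (cap - lam_total)         # crude aggregate delay bound
        if delay > delay_slo:
            continue                            # violates the SLO
        cost = sum(n * p for n, p in zip(counts, prices))
        if best is None or cost < best[0]:
            best = (cost, counts)
    return best

cost, plan = min_cost_plan(prices=[3.0, 5.0, 4.0], lam_total=25.0)
# All demand lands in the cheapest market, using just enough servers.
```

Real instances replace this enumeration with mixed-integer programming, since the number of locations, time periods, and price scenarios makes brute force infeasible.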
EnerJ: Approximate Data Types for Safe and General Low-Power Computation
Abstract - Cited by 93 (18 self)
Energy is increasingly a first-order concern in computer systems. Exploiting energy-accuracy trade-offs is an attractive choice in applications that can tolerate inaccuracies. Recent work has explored exposing this trade-off in programming models. A key challenge, though, is how to isolate parts of the program that must be precise from those that can be approximated so that a program functions correctly even as quality of service degrades. We propose using type qualifiers to declare data that may be subject to approximate computation. Using these types, the system automatically maps approximate variables to low-power storage, uses low-power operations, and even applies more energy-efficient algorithms provided by the programmer. In addition, the system can statically guarantee isolation of the precise program component from the approximate component. This allows a programmer to control explicitly how information flows from approximate data to precise data. Importantly, employing static analysis eliminates the need for dynamic checks, further improving energy savings. As a proof of concept, we develop EnerJ, an extension to Java that adds approximate data types. We also propose a hardware architecture that offers explicit approximate storage and computation. We port several applications to EnerJ and show that our extensions are expressive and effective; a small number of annotations lead to significant potential energy savings (10%–50%) at very little accuracy cost.
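EnerJ enforces its isolation statically through Java type qualifiers; the flow rule itself can be mimicked at runtime in a few lines. The class and function names below are hypothetical stand-ins for illustration, not EnerJ's actual API, and a dynamic check is exactly what EnerJ's static analysis avoids:

```python
class Approx:
    """Wrapper marking a value as approximate (eligible for lossy
    storage or low-power operations)."""
    def __init__(self, value):
        self.value = value

def endorse(x):
    """Explicit, programmer-audited cast from approximate to precise;
    the only sanctioned way approximate data reaches precise state."""
    return x.value if isinstance(x, Approx) else x

def store_precise(x):
    """Precise sink: rejects approximate data that was not endorsed."""
    if isinstance(x, Approx):
        raise TypeError("approximate value flows into precise state")
    return x

pixel = Approx(0.73)                 # tolerant data, e.g. an image pixel
ok = store_precise(endorse(pixel))   # allowed: flow is explicit
```

Attempting `store_precise(pixel)` directly raises, mirroring how the type system rejects implicit approximate-to-precise flows at compile time.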
Understanding and Designing New Server Architectures for Emerging Warehouse-Computing Environments
- In International Symposium on Computer Architecture, 2008
Abstract - Cited by 92 (13 self)
This paper seeks to understand and design next-generation servers for emerging "warehouse-computing" environments. We make two key contributions. First, we put together a detailed evaluation infrastructure including a new benchmark suite for warehouse-computing workloads, and detailed performance, cost, and power models, to quantitatively characterize bottlenecks. Second, we study a new solution that incorporates volume non-server-class components in novel packaging solutions, with memory sharing and flash-based disk caching. Our results show that this approach has promise, with a 2X improvement on average in performance-per-dollar for our benchmark suite.
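A performance-per-dollar claim of this kind reduces to throughput over total cost of ownership (purchase price plus lifetime energy). The sketch below frames that ratio with purely illustrative numbers, chosen only to show how cheaper, lower-power components can win the metric despite lower raw throughput:

```python
def perf_per_dollar(throughput, server_cost, power_w,
                    years=3, price_kwh=0.10):
    """Throughput per TCO dollar: purchase cost plus energy over a
    fixed deployment lifetime (assumed 3 years, $0.10/kWh)."""
    energy_cost = power_w / 1000.0 * 24 * 365 * years * price_kwh
    return throughput / (server_cost + energy_cost)

# Hypothetical server-class baseline vs. volume non-server-class build.
baseline = perf_per_dollar(throughput=100, server_cost=3000, power_w=300)
proposed = perf_per_dollar(throughput=90, server_cost=1200, power_w=150)
improvement = proposed / baseline   # roughly 2x on these made-up numbers
```

The sensitivity to lifetime and electricity price is why the paper builds detailed cost and power models rather than relying on a single ratio like this.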
Cluster-level feedback power control for performance optimization
- In HPCA, 2008
Abstract - Cited by 89 (24 self)
Power control is becoming a key challenge for effectively operating a modern data center. In addition to reducing operating costs, precisely controlling power consumption is an essential way to avoid system failures caused by power capacity overload or overheating due to increasing density. Control-theoretic techniques have recently shown promise for power management thanks to their better control performance and theoretical guarantees on control accuracy and system stability. However, existing work oversimplifies the problem by controlling a single server independently from others. As a result, at the cluster level, where multiple servers are correlated by common workloads and share common power supplies, power cannot be shared to improve application performance. In this paper, we propose a cluster-level power controller that shifts power among servers based on their performance needs, while controlling the total power of the cluster to stay below a constraint. Our controller features a rigorous design based on optimal multi-input, multi-output (MIMO) control theory. Empirical results demonstrate that our controller outperforms two state-of-the-art controllers, achieving better application performance and more accurate power control.
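The cluster-level idea, shifting budget toward servers that need it while the total stays under a cap, can be caricatured as a single control step. The need-proportional target and the first-order smoothing gain below are simplifications for illustration, not the paper's optimal MIMO design:

```python
def control_step(budgets, needs, cap, gain=0.5):
    """One control period: move per-server power budgets toward a
    need-proportional split of the cluster cap. The smoothing gain
    damps oscillation; the budget total is preserved at the cap."""
    total_need = sum(needs)
    targets = [cap * n / total_need for n in needs]
    return [b + gain * (t - b) for b, t in zip(budgets, targets)]

budgets = [100.0, 100.0, 100.0]   # 300 W cap currently split evenly
needs = [1.0, 3.0, 1.0]           # middle server is performance-critical
new = control_step(budgets, needs, cap=300.0)
# Power shifts toward the needy server while the total stays at 300 W.
```

A single proportional loop like this has no stability or accuracy guarantees; the paper's contribution is deriving the controller from optimal control theory so those guarantees hold despite shared power supplies and correlated workloads.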