Results 1–10 of 543
Energy-Aware Wireless Microsensor Networks
 IEEE Signal Processing Magazine
, 2002
Abstract (Cited by 286, 1 self):
This article describes architectural and algorithmic approaches that designers can use to enhance the energy awareness of wireless sensor networks. The article starts off with an analysis of the power consumption characteristics of typical sensor node architectures and identifies the various factors that affect system lifetime. We then present a suite of techniques that perform aggressive energy optimization while targeting all stages of sensor network design, from individual nodes to the entire network. Maximizing network lifetime requires the use of a well-structured design methodology, which enables energy-aware design and operation of all aspects of the sensor network, from the underlying hardware platform to the application software and network protocols. Adopting such a holistic approach ensures that energy awareness is incorporated not only into individual sensor nodes but also into groups of communicating nodes and the entire sensor network. By following an energy-aware design methodology based on techniques such as those described in this article, designers can enhance network lifetime by orders of magnitude.
ECOSystem: Managing Energy as a First Class Operating System Resource
, 2002
Abstract (Cited by 220, 5 self):
Energy consumption has recently been widely recognized as a major challenge of computer systems design. This paper explores how to support energy as a first-class operating system resource. Energy, because of its global system nature, presents challenges beyond those of conventional resource management. To meet these challenges we propose the Currentcy Model that unifies energy accounting over diverse hardware components and enables fair allocation of available energy among applications. Our particular goal is to extend battery lifetime by limiting the average discharge rate and to share this limited resource among competing tasks according to user preferences. To demonstrate how our framework supports explicit control over the battery resource we implemented ECOSystem, a modified Linux, that incorporates our currentcy model. Experimental results show that ECOSystem accurately accounts for the energy consumed by asynchronous device operation, can achieve a target battery lifetime, and proportionally shares the limited energy resource among competing tasks.
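The currentcy idea can be illustrated with a toy allocator (a hedged sketch, not ECOSystem's implementation; the epoch length, discharge rate, task names, and share values below are invented for illustration):

```python
# Toy currentcy-style allocator (an illustrative sketch, not ECOSystem's
# implementation): each epoch, an energy budget derived from a target
# discharge rate is split among tasks by user-assigned shares.

def epoch_budget(target_discharge_mw, epoch_s):
    """Currentcy minted per epoch: the energy allowed at the target drain rate."""
    return target_discharge_mw * epoch_s  # millijoules

def allocate(budget, shares):
    """Split the epoch's currentcy proportionally to each task's share."""
    total = sum(shares.values())
    return {task: budget * s / total for task, s in shares.items()}

budget = epoch_budget(target_discharge_mw=1000.0, epoch_s=0.1)  # 100 mJ
print(allocate(budget, {"editor": 3, "player": 1}))
# {'editor': 75.0, 'player': 25.0}
```

Limiting the per-epoch budget caps the average discharge rate, and proportional shares give the user-preference-weighted split the abstract describes.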
Dynamic and Aggressive Scheduling Techniques for Power-Aware Real-Time Systems
, 2001
Abstract (Cited by 204, 25 self):
In this paper, we address power-aware scheduling of periodic hard real-time tasks using dynamic voltage scaling. Our solution includes three parts: (a) a static (offline) solution to compute the optimal speed, assuming worst-case workload for each arrival, (b) an online speed reduction mechanism to reclaim energy by adapting to the actual workload, and (c) an online, adaptive and speculative speed adjustment mechanism to anticipate early completions of future executions by using the average-case workload information. All these solutions still guarantee that all deadlines are met. Our simulation results show that the reclaiming algorithm saves a striking 50% of the energy over the static algorithm. Further, our speculative techniques allow for an additional approximately 20% savings over the reclaiming algorithm. In this study, we also establish that solving an instance of the static power-aware scheduling problem is equivalent to solving an instance of the reward-based scheduling problem [1, 4] with concave reward functions.
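The static/reclaiming split described in parts (a) and (b) can be sketched minimally (an illustrative simplification, not the authors' exact algorithms; the task parameters and the one-step reclaiming rule are invented):

```python
# Illustrative sketch of static speed selection plus slack reclaiming
# (not the paper's exact algorithms).

def static_speed(tasks):
    """tasks: (worst-case cycles at full speed, period) pairs.
    For EDF, a utilization-based speed sum(C/T) keeps all deadlines."""
    u = sum(c / t for c, t in tasks)
    return min(max(u, 0.0), 1.0)  # clamp to the speed range [0, 1]

def reclaimed_speed(budget_cycles, actual_remaining_cycles, base_speed):
    """If a job needs fewer cycles than budgeted, run it slower over the
    same time allotment, reclaiming the slack as energy savings."""
    if actual_remaining_cycles >= budget_cycles:
        return base_speed
    return base_speed * actual_remaining_cycles / budget_cycles

tasks = [(1.0, 4.0), (2.0, 8.0)]      # invented task set
s = static_speed(tasks)               # 0.25 + 0.25 = 0.5
print(s)                              # 0.5
print(reclaimed_speed(2.0, 1.0, s))   # half the budget needed -> 0.25
```

The deadline guarantee is preserved because the slowed job still fits in the time window the static schedule already reserved for its worst case.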
Power Conscious Fixed Priority Scheduling for Hard Real-Time Systems
, 1999
Abstract (Cited by 200, 5 self):
Power-efficient design of real-time systems based on programmable processors becomes more important as system functionality is increasingly realized through software. This paper presents a power-efficient version of a widely used fixed-priority scheduling method. The method yields a power reduction by exploiting slack times, both those inherent in the system schedule and those arising from variations of execution times. The proposed run-time mechanism is simple enough to be implemented in most kernels. Experimental results show that the proposed scheduling method obtains a significant power reduction across several kinds of applications.
Improving Dynamic Voltage Scaling Algorithms with PACE
, 2001
Abstract (Cited by 173, 2 self):
This paper addresses algorithms for dynamically varying (scaling) CPU speed and voltage in order to save energy. Such scaling is useful and effective when it is immaterial when a task completes, as long as it meets some deadline. We show how to modify any scaling algorithm to keep performance the same but minimize expected energy consumption. We refer to our approach as PACE (Processor Acceleration to Conserve Energy) since the resulting schedule increases speed as the task progresses. Since PACE depends on the probability distribution of the task's work requirement, we present methods for estimating this distribution and evaluate these methods on a variety of real workloads. We also show how to approximate the optimal schedule with one that changes speed a limited number of times. Using PACE causes very little additional overhead, and yields substantial reductions in CPU energy consumption. Simulations using real workloads show it reduces the CPU energy consumption of previously published algorithms by up to 49.5%, with an average of 20.6%, without any effect on performance.
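PACE's shaping rule can be sketched as follows, assuming a power model where power grows as speed raised to a constant α > 1, so that relative speed at each point in the task is the work distribution's tail probability raised to −1/(α−1) (a simplified discrete illustration, not the paper's full method; the distribution and deadline are invented):

```python
# Hedged sketch of PACE's shaping idea: relative speed at work unit i is
# Pr(work > i) ** (-1 / (alpha - 1)), uniformly scaled so the worst case
# still meets the deadline. Not the paper's full method.

def pace_speeds(tail_prob, deadline, alpha=3.0):
    """tail_prob[i] = Pr(task needs more than i work units)."""
    shape = [p ** (-1.0 / (alpha - 1.0)) for p in tail_prob]  # relative speeds
    worst_case_time = sum(1.0 / s for s in shape)             # time at shape speeds
    scale = worst_case_time / deadline                        # uniform rescaling
    return [scale * s for s in shape]

# work is 1, 2, or 3 units with equal probability (invented distribution)
speeds = pace_speeds([1.0, 2/3, 1/3], deadline=6.0)
print(speeds[0] < speeds[1] < speeds[2])            # True: speed ramps up
print(abs(sum(1/s for s in speeds) - 6.0) < 1e-9)   # True: worst case on time
```

The ramp-up matches the abstract's description: early work units, which every execution performs, run slowly; later units, reached only by long executions, run fast, minimizing expected energy without hurting the deadline.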
Speed scaling to manage energy and temperature
 Journal of the ACM
Abstract (Cited by 169, 17 self):
We first consider online speed scaling algorithms to minimize the energy used subject to the constraint that every job finishes by its deadline. We assume that the power required to run at speed s is P(s) = s^α. We provide a tight α^α bound on the competitive ratio of the previously proposed Optimal Available algorithm. This improves the best known competitive ratio by a factor of 2^α. We then introduce a new online algorithm, and show that this algorithm's competitive ratio is at most 2(α/(α−1))^α e^α. This competitive ratio is significantly better and is approximately 2e^(α+1) for large α. Our result is essentially tight for large α. In particular, as α approaches infinity, we show that any algorithm must have competitive ratio e^(α−1)/α (up to lower-order terms). We then turn to the problem of dynamic speed scaling to minimize the maximum temperature that the device ever reaches, again subject to the constraint that all jobs finish by their deadlines. We assume that the device cools according to Fourier's law. We show how to solve this problem in polynomial time, within any error bound, using the Ellipsoid algorithm.
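The speed-scaling results above rest on the power function being convex in speed (commonly modeled as speed raised to a constant α > 1): for the same work in the same time, any deviation from a single constant speed costs energy. A quick numeric check, with α = 3 as an illustrative exponent:

```python
# Numeric check of why a convex power function P(s) = s**alpha favors one
# constant speed: same work (2 units) in the same time (2 s), but the
# split schedule pays more energy. alpha = 3 is an illustrative exponent.

def energy(speed, seconds, alpha=3.0):
    return speed ** alpha * seconds  # power * time

constant = energy(1.0, 2.0)                  # speed 1 for 2 s: 2 units of work
split = energy(0.5, 1.0) + energy(1.5, 1.0)  # 0.5 + 1.5 = 2 units of work
print(constant, split)  # 2.0 3.5
```

This is Jensen's inequality in miniature, and it is why offline optima run each job at the slowest constant speed its deadline permits.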
Power optimization of real-time embedded systems on variable speed processors
, 2000
Abstract (Cited by 129, 0 self):
Power-efficient design of real-time embedded systems based on programmable processors becomes more important as system functionality is increasingly realized through software. This paper presents a power optimization method for real-time embedded applications on a variable speed processor. The method combines offline and online components. The offline component determines the lowest possible maximum processor speed while guaranteeing the deadlines of all tasks. The online component dynamically varies the processor speed or brings the processor into a power-down mode according to the status of the task set, in order to exploit execution time variations and idle intervals. Experimental results show that the proposed method obtains a significant power reduction across several kinds of applications.
Algorithms for power savings
 In SODA '03: Proceedings of the Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms
, 2003
Abstract (Cited by 127, 6 self):
This paper examines two different mechanisms for saving power in battery-operated embedded systems. The first is that the system can be placed in a sleep state if it is idle. However, a fixed amount of energy is required to bring the system back into an active state in which it can resume work. The second way in which power savings can be achieved is by varying the speed at which jobs are run. We utilize a power consumption curve P(s) which indicates the power consumption level given a particular speed. We assume that P(s) is convex, nondecreasing and nonnegative for s ≥ 0. The problem is to schedule arriving jobs in a way that minimizes total energy use and so that each job is completed after its release time and before its deadline. We assume that all jobs can be preempted and resumed at no cost. Although each problem has been considered separately, this is the first theoretical analysis of systems that can use both mechanisms. We give an offline algorithm that is within a factor of two of the optimal algorithm. We also give an online algorithm with a constant competitive ratio.
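The sleep-state half of the trade-off reduces to a break-even test. A sketch under the assumption of a known idle power and a fixed wake-up energy (both figures invented for illustration):

```python
# Break-even test for entering a sleep state during an idle gap (a sketch;
# the idle power and wake-up energy figures are invented): sleeping wins
# only when the energy burned idling over the gap exceeds the fixed cost
# of waking back up.

def should_sleep(gap_s, idle_power_w, wakeup_energy_j):
    return gap_s * idle_power_w > wakeup_energy_j

print(should_sleep(2.0, 1.0, 3.0))  # False: gap too short to repay wake-up
print(should_sleep(5.0, 1.0, 3.0))  # True: 5 J idling vs 3 J to wake
```

What makes the combined problem interesting is that speed scaling changes the gaps themselves: running jobs faster creates longer idle intervals in which sleeping becomes worthwhile, which is the interaction the paper analyzes.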
Synthesis Techniques for Low-Power Hard Real-Time Systems on Variable Voltage Processors
 In Proceedings of the IEEE Real-Time Systems Symposium (RTSS '98)
, 1998
"... ..."
The Design, Implementation, and Evaluation of a Compiler Algorithm for CPU Energy Reduction
 In Proceedings of the ACM SIGPLAN Conference on Programming Language Design and Implementation
, 2003
Abstract (Cited by 124, 7 self):
This paper presents the design and implementation of a compiler algorithm that effectively optimizes programs for energy usage using dynamic voltage scaling (DVS). The algorithm identifies program regions where the CPU can be slowed down with negligible performance loss. It is implemented as a source-to-source level transformation using the SUIF2 compiler infrastructure. Physical measurements on a high-performance laptop show that total system (i.e., laptop) energy savings of up to 28% can be achieved with performance degradation of less than 5% for the SPECfp95 benchmarks. On average, the system energy and energy-delay product are reduced by 11% and 9%, respectively, with a performance slowdown of 2%. It was also discovered that the energy usage of the programs using our DVS algorithm is within 6% of the theoretical lower bound. To the best of our knowledge, this is one of the first works to evaluate DVS algorithms by physical measurements.