Department: Computer Science & Engineering
Name: Gaurav Dhiman
Email: gdhiman@ucsd.edu
Grad Year: 2010
Power consumption is a key issue in the design of computing systems today because of the need to increase battery lifetime in portable systems and reduce thermal hotspots in larger systems. In this poster we present a novel dynamic power management (DPM) technique that employs a set of multiple DPM policies and uses an online learning algorithm to dynamically select the best-suited policy at any point in time. The approach builds on existing DPM policies and guarantees that, at any point in time, its performance is close to that of the best-performing policy in the set.
The primary motivation for dynamically selecting among multiple DPM policies comes from the observation that no single policy fits all operating conditions perfectly. Existing DPM policies tackle the problem by selecting an appropriate time to put the device to sleep, typically after some amount of idle time has elapsed. This timeout can be fixed, adaptive, or randomized. While simpler DPM policies, such as timeout and predictive policies, make this decision heuristically with no performance guarantees, more sophisticated stochastic policies guarantee optimality for stationary workloads. As a result, different policies outperform one another under different workloads and devices. Our approach exploits this fact by performing online-learning-based selection among a set of such DPM policies, each of which performs well for a given set of conditions and workloads. Thus, the use of online learning with a carefully selected set of policies yields a novel, adaptive, and robust DPM mechanism that achieves good performance across a wide range of applications. In our experiments on a hard disk drive we achieve up to 60% energy savings over the default Windows XP policy, while for a wireless adapter we achieve 25% energy savings.
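The poster does not spell out the learning algorithm itself; the sketch below assumes a standard multiplicative-weights ("experts") scheme, which is one common way to obtain the stated guarantee of staying close to the best policy in the set. The policy classes, the idle-period loss function, and the break-even parameter are hypothetical illustrations, not the authors' implementation.

    import random

    class FixedTimeoutPolicy:
        """Expert: sleep after a fixed idle timeout (seconds)."""
        def __init__(self, timeout):
            self.timeout = timeout
        def suggest_timeout(self, history):
            return self.timeout

    class AdaptiveTimeoutPolicy:
        """Expert: timeout tracks a fraction of the last observed idle period."""
        def __init__(self, fraction=0.5):
            self.fraction = fraction
        def suggest_timeout(self, history):
            return self.fraction * history[-1] if history else 1.0

    def idle_period_loss(timeout, idle_len, break_even=2.0):
        """Hypothetical loss in [0, 1]: energy wasted before sleeping plus a
        wake-up penalty when the idle period is shorter than the device's
        break-even time (an assumed, device-specific constant)."""
        if idle_len <= timeout:
            # Device never slept: all idle time was spent at active power.
            return min(idle_len / break_even, 1.0)
        wasted = timeout + (0.5 if idle_len < break_even else 0.0)
        return min(wasted / break_even, 1.0)

    def run_online_learning(policies, idle_periods, eta=0.3):
        """Multiplicative-weights selection among DPM policies, one decision
        per idle period; regret versus the best single policy grows sublinearly."""
        weights = [1.0] * len(policies)
        history, total_loss = [], 0.0
        for idle_len in idle_periods:
            total = sum(weights)
            probs = [w / total for w in weights]
            # Pick an expert with probability proportional to its weight;
            # its timeout is what would actually be applied to the device.
            chosen = random.choices(range(len(policies)), probs)[0]
            total_loss += idle_period_loss(
                policies[chosen].suggest_timeout(history), idle_len)
            # Update every expert's weight based on how it would have done.
            for i, p in enumerate(policies):
                loss = idle_period_loss(p.suggest_timeout(history), idle_len)
                weights[i] *= (1.0 - eta * loss)
            history.append(idle_len)
        return weights, total_loss

    if __name__ == "__main__":
        experts = [FixedTimeoutPolicy(1.0), FixedTimeoutPolicy(5.0), AdaptiveTimeoutPolicy()]
        trace = [0.5, 8.0, 3.0, 12.0, 0.2, 6.0]  # synthetic idle-period lengths (seconds)
        print(run_online_learning(experts, trace))

In this formulation each candidate DPM policy is an "expert": policies that would have wasted less energy on the observed idle periods accumulate weight, so over time the selector's total loss approaches that of the single best policy in the set.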