Open Access

Hardware Architecture of Reinforcement Learning Scheme for Dynamic Power Management in Embedded Systems

EURASIP Journal on Embedded Systems 2007, 2007:065478

https://doi.org/10.1155/2007/65478

Received: 6 July 2006

Accepted: 28 May 2007

Published: 4 July 2007

Abstract

Dynamic power management (DPM) is a technique for reducing the power consumption of electronic systems by selectively shutting down idle components. In this paper, a novel and nontrivial enhancement of conventional reinforcement learning (RL) is adopted to choose the optimal policy from among the existing DPM policies. A hardware architecture derived from a VHDL model of the temporal-difference RL algorithm is proposed, which can suggest the winner policy to adopt for any given workload in order to achieve power savings. The effectiveness of this approach is also demonstrated with an event-driven simulator, written in Java, for power-manageable embedded devices. The results show that RL applied to DPM can yield power savings of up to 28%.

[1–11]
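For intuition, the minimal Java sketch below (Java being the language of the paper's simulator) shows one way a temporal-difference learner can rank a fixed set of candidate DPM policies by the reward each earns, for example energy saved during an idle period. All names and constants here (PolicySelector, ALPHA, EPSILON, the synthetic reward model) are illustrative assumptions, not details taken from the paper.

import java.util.Random;

/** Hypothetical sketch of TD-style selection among candidate DPM policies.
 *  Names, constants, and the reward model are assumptions, not the paper's design. */
public class PolicySelector {
    private final double[] value;               // running estimate of each policy's long-run reward
    private static final double ALPHA = 0.1;    // learning rate (assumed)
    private static final double EPSILON = 0.1;  // exploration probability (assumed)
    private final Random rng = new Random();

    public PolicySelector(int numPolicies) {
        value = new double[numPolicies];
    }

    // Epsilon-greedy choice of the policy to apply during the next idle period.
    public int choosePolicy() {
        if (rng.nextDouble() < EPSILON) {
            return rng.nextInt(value.length);
        }
        return greedy();
    }

    // Index of the currently best-valued policy.
    public int greedy() {
        int best = 0;
        for (int i = 1; i < value.length; i++) {
            if (value[i] > value[best]) {
                best = i;
            }
        }
        return best;
    }

    // TD-style update: move the estimate toward the reward observed after
    // running the policy, e.g. energy saved minus a wake-up latency penalty.
    public void update(int policy, double reward) {
        value[policy] += ALPHA * (reward - value[policy]);
    }

    public static void main(String[] args) {
        // Three hypothetical candidate policies (e.g. timeout, predictive, stochastic).
        PolicySelector selector = new PolicySelector(3);
        double[] meanReward = {0.5, 0.8, 0.3}; // synthetic per-policy reward for one workload
        Random noise = new Random(42);
        for (int episode = 0; episode < 1000; episode++) {
            int p = selector.choosePolicy();
            selector.update(p, meanReward[p] + 0.1 * noise.nextGaussian());
        }
        System.out.println("Winner policy index: " + selector.greedy());
    }
}

Under this assumed reward signal, each estimate converges toward its policy's mean reward, so the greedy choice identifies the winner policy for the workload; the paper's contribution is realizing an update of this kind in hardware via a VHDL model.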

Authors’ Affiliations

(1)
Department of Electronics and Communication Engineering, Government College of Technology
(2)
Thanthai Periyar Government Institute of Technology (TPGIT)

References

  1. Irani S, Shukla S, Gupta R: Competitive analysis of dynamic power management strategies for systems with multiple power savings states. Tech. Rep. 01-50, University of California, Irvine, Calif, USA; 2001.
  2. Benini L, Bogliolo A, Paleologo GA, de Micheli G: Policy optimization for dynamic power management. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 1999, 18(6):813-833. doi:10.1109/43.766730
  3. Lu Y-H, Simunic T, de Micheli G: Software controlled power management. In Proceedings of the 7th International Workshop on Hardware/Software Codesign (CODES '99), May 1999, Rome, Italy; 157-161.
  4. Qiu Q, Pedram M: Dynamic power management based on continuous-time Markov decision processes. In Proceedings of the 36th Annual Design Automation Conference (DAC '99), June 1999, New Orleans, La, USA; 555-561.
  5. Lu Y-H, de Micheli G: Comparing system-level power management policies. IEEE Design and Test of Computers 2001, 18(2):10-19. doi:10.1109/54.914592
  6. Shukla SK, Gupta RK: A model checking approach to evaluating system level dynamic power management policies for embedded systems. In Proceedings of the 6th IEEE International High-Level Design Validation and Test Workshop, September 2001, Monterey, Calif, USA; 53-57.
  7. Watts C, Ambatipudi R: Dynamic energy management in embedded systems. Computing & Control Engineering 2003, 14(5):36-40. doi:10.1049/cce:20030508
  8. Chung E-Y, Benini L, de Micheli G: Dynamic power management using adaptive learning tree. In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD '99), November 1999, San Jose, Calif, USA; 274-279.
  9. Sutton RS, Barto AG: Reinforcement Learning: An Introduction. MIT Press, Cambridge, Mass, USA; 1998.
  10. Ribeiro CHC: A tutorial on reinforcement learning techniques. In Proceedings of the International Conference on Neural Networks, July 1999, Washington, DC, USA. INNS Press.
  11. Johnson RA: Probability and Statistics for Engineers. Prentice-Hall, Englewood Cliffs, NJ, USA; 2001.

Copyright

© Prabha and Monie 2007

This article is published under license to BioMed Central Ltd. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.