

Hardware Architecture of Reinforcement Learning Scheme for Dynamic Power Management in Embedded Systems


Dynamic power management (DPM) is a technique for reducing the power consumption of electronic systems by selectively shutting down idle components. In this paper, a novel and nontrivial enhancement of conventional reinforcement learning (RL) is adopted to choose the optimal policy from among the existing DPM policies. A hardware architecture, evolved from a VHDL model of the temporal-difference RL algorithm, is proposed; it can suggest the winner policy to adopt for any given workload to achieve power savings. The effectiveness of this approach is also demonstrated with an event-driven simulator, implemented in Java, for power-manageable embedded devices. The results show that RL applied to DPM can yield power savings of up to 28%.
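The policy-selection scheme described above can be sketched in software. The following Java sketch is a minimal illustration only, not the paper's hardware design: the policy names, the number of workload classes, the reward model, and all constants are assumptions introduced for the example. It shows a tabular temporal-difference (Q-learning) agent learning which DPM policy wins for each workload class:

```java
import java.util.Random;

// Illustrative sketch: a tabular temporal-difference (Q-learning) agent
// that learns a "winner" DPM policy per workload class. Policy names,
// workload classes, reward model, and constants are all assumptions.
public class DpmPolicySelector {
    static final String[] POLICIES = {"timeout", "predictive", "stochastic"};
    static final int WORKLOAD_CLASSES = 4;   // e.g. binned idle-period statistics

    final double[][] q = new double[WORKLOAD_CLASSES][POLICIES.length];
    final double alpha = 0.1;    // learning rate
    final double gamma = 0.9;    // discount factor
    final double epsilon = 0.1;  // exploration rate
    final Random rng = new Random(42);

    // Greedy action: the current winner policy for a workload class.
    int greedy(int state) {
        int best = 0;
        for (int a = 1; a < POLICIES.length; a++)
            if (q[state][a] > q[state][best]) best = a;
        return best;
    }

    // Epsilon-greedy selection used during learning.
    int chooseAction(int state) {
        return rng.nextDouble() < epsilon ? rng.nextInt(POLICIES.length)
                                          : greedy(state);
    }

    // TD(0)/Q-learning update: Q(s,a) += alpha*(r + gamma*max_a' Q(s',a') - Q(s,a)).
    void update(int s, int a, double reward, int sNext) {
        double maxNext = q[sNext][greedy(sNext)];
        q[s][a] += alpha * (reward + gamma * maxNext - q[s][a]);
    }

    public static void main(String[] args) {
        DpmPolicySelector sel = new DpmPolicySelector();
        // Toy environment: pretend policy 1 ("predictive") costs the least
        // energy in every workload class (reward = negative energy cost).
        for (int step = 0; step < 5000; step++) {
            int s = sel.rng.nextInt(WORKLOAD_CLASSES);
            int a = sel.chooseAction(s);
            double reward = (a == 1) ? -1.0 : -2.0;
            sel.update(s, a, reward, sel.rng.nextInt(WORKLOAD_CLASSES));
        }
        for (int s = 0; s < WORKLOAD_CLASSES; s++)
            System.out.println("workload class " + s + " -> "
                               + POLICIES[sel.greedy(s)]);
    }
}
```

With this toy reward model the learner settles on the lower-cost policy for every workload class; in the paper's setting the reward would instead come from the measured or simulated power cost of the policy chosen for each idle period.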

References


  1. Irani S, Shukla S, Gupta R: Competitive analysis of dynamic power management strategies for systems with multiple power savings states. Tech. Rep. 01-50, University of California, Irvine, Calif, USA; 2001.

  2. Benini L, Bogliolo A, Paleologo GA, de Micheli G: Policy optimization for dynamic power management. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 1999, 18(6):813-833. 10.1109/43.766730

  3. Lu Y-H, Simunic T, de Micheli G: Software controlled power management. Proceedings of the 7th International Workshop on Hardware/Software Codesign (CODES '99), May 1999, Rome, Italy, 157-161.

  4. Qiu Q, Pedram M: Dynamic power management based on continuous-time Markov decision processes. Proceedings of the 36th Annual Design Automation Conference (DAC '99), June 1999, New Orleans, La, USA, 555-561.

  5. Lu Y-H, de Micheli G: Comparing system-level power management policies. IEEE Design and Test of Computers 2001, 18(2):10-19. 10.1109/54.914592

  6. Shukla SK, Gupta RK: A model checking approach to evaluating system level dynamic power management policies for embedded systems. Proceedings of the 6th IEEE International High-Level Design Validation and Test Workshop, September 2001, Monterey, Calif, USA, 53-57.

  7. Watts C, Ambatipudi R: Dynamic energy management in embedded systems. Computing & Control Engineering 2003, 14(5):36-40. 10.1049/cce:20030508

  8. Chung E-Y, Benini L, de Micheli G: Dynamic power management using adaptive learning tree. Proceedings of the IEEE/ACM International Conference on Computer-Aided Design (ICCAD '99), November 1999, San Jose, Calif, USA, 274-279.

  9. Sutton RS, Barto AG: Reinforcement Learning: An Introduction. MIT Press, Cambridge, Mass, USA; 1998.

  10. Ribeiro CHC: A tutorial on reinforcement learning techniques. Proceedings of the International Conference on Neural Networks, July 1999, Washington, DC, USA. INNS Press.

  11. Johnson RA: Probability and Statistics for Engineers. Prentice-Hall, Englewood Cliffs, NJ, USA; 2001.


Author information

Correspondence to Viswanathan Lakshmi Prabha.

Rights and permissions

Open Access. This article is distributed under the terms of the Creative Commons Attribution 2.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Prabha, V.L., Monie, E.C. Hardware Architecture of Reinforcement Learning Scheme for Dynamic Power Management in Embedded Systems. J Embedded Systems 2007, 065478 (2007).



  • Optimal Policy
  • Reinforcement Learning
  • Electronic Circuit
  • Power Saving
  • Hardware Architecture