Toward efficient compute-intensive job allocation for green data centers: a deep reinforcement learning approach

Reducing the energy consumption of the servers in a data center via proper job allocation is desirable. Existing advanced job allocation algorithms, based on constrained optimization formulations capturing servers’ complex power consumption and thermal dynamics, often scale poorly with the data center size and optimization horizon. This thesis applies deep reinforcement learning to build an allocation algorithm for long-lasting and compute-intensive jobs that are increasingly seen among today’s computation demands. Specifically, a deep Q-network is trained to allocate jobs, aiming to maximize a cumulative reward over long horizons. The training is performed offline using a computational model based on long short-term memory networks that capture the servers’ power and thermal dynamics. This offline approach avoids the slow online convergence, low energy efficiency, and potential server overheating that extensive state-action space exploration would cause if the agent interacted directly with the physical data center under the usual online learning scheme. At run time, the trained Q-network is forward-propagated with little computation to allocate jobs. Evaluation based on eight months’ physical state and job arrival records from a national supercomputing data center hosting 1,152 processors shows that this solution reduces computing power consumption by more than 10% and processor temperature by more than 4°C without sacrificing job processing throughput.
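The run-time decision described in the abstract — forward-propagating a trained Q-network once per arriving job and sending the job to the server with the highest predicted value — can be sketched roughly as follows. The state layout, the tiny linear stand-in for the Q-network, and all weights below are illustrative assumptions for exposition, not the thesis's actual model or data.

```python
def q_values(state, weights, bias):
    """Forward pass of a stand-in one-layer 'Q-network'.

    state: per-server features flattened into one list, e.g.
           [temp_server0, util_server0, temp_server1, util_server1].
    Returns one Q-value per candidate action (target server)."""
    return [
        sum(w_i * s_i for w_i, s_i in zip(weights[a], state)) + bias[a]
        for a in range(len(bias))
    ]

def allocate(state, weights, bias):
    """Greedy policy: allocate the arriving job to the server
    whose action has the largest predicted Q-value."""
    q = q_values(state, weights, bias)
    return max(range(len(q)), key=q.__getitem__)

# Toy example with two servers. The hand-set weights simply penalize
# sending work to a hot, highly utilized server (a real Q-network's
# weights would come from offline training against the LSTM model).
state = [55.0, 0.9, 40.0, 0.2]        # server 0 hot/loaded, server 1 cool/idle
weights = [[-1.0, -10.0, 0.0, 0.0],   # Q for "allocate to server 0"
           [0.0, 0.0, -1.0, -10.0]]   # Q for "allocate to server 1"
bias = [0.0, 0.0]
print(allocate(state, weights, bias))  # → 1 (the cooler, less-utilized server)
```

The point of the sketch is the inference cost: once trained, allocation is a single cheap forward pass plus an argmax, in contrast to solving a constrained optimization at every arrival.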

Bibliographic Details
Main Author: Yi, Deliang
Other Authors: Wen Yonggang
Format: Theses and Dissertations
Language:English
Published: 2019
Subjects: Engineering::Computer science and engineering
Online Access:https://hdl.handle.net/10356/104419
http://hdl.handle.net/10220/50011
Institution: Nanyang Technological University
Degree: Master of Engineering
School: School of Computer Science and Engineering
Description: 52 p. (application/pdf)
DOI: 10.32657/10356/104419
Collection: DR-NTU (NTU Library)
Citation: Yi, D. (2019). Toward efficient compute-intensive job allocation for green data centers: a deep reinforcement learning approach. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/104419