Green data analytics of supercomputing from massive sensor networks: Does workload distribution matter?
Main Authors: | , , |
---|---|
Format: | text |
Language: | English |
Published: | Institutional Knowledge at Singapore Management University, 2023 |
Subjects: | |
Online Access: | https://ink.library.smu.edu.sg/sis_research/7813 https://ink.library.smu.edu.sg/context/sis_research/article/8816/viewcontent/GreenDataAnalytics_av.pdf |
Institution: | Singapore Management University |
Summary: | Energy costs represent a significant share of the total cost of ownership in high-performance computing (HPC) systems. Using a unique data set collected by massive sensor networks in a petascale national supercomputing center, we first present an explanatory model to identify key factors that affect energy consumption in supercomputing. Our analytic results show that not only does computing node utilization significantly affect energy consumption, but workload distribution among the nodes also has significant effects and could effectively be leveraged to improve energy efficiency. Next, we establish the model's strong performance using in-sample and out-of-sample analyses. We then develop prescriptive models for energy-optimal runtime workload management and extend them to consider energy consumption and job performance tradeoffs. Specifically, we present four dynamic resource management methodologies (packing, load balancing, threshold-based switching, and energy optimization), model their application at two levels (purely within-rack and jointly cross-rack resource allocation), and explore runtime resource redistribution policies for jobs under the emergent principle of computational steering, comparatively evaluating strategies that use computational steering against those that do not. Our experimental studies show that packing is preferred when the total workload of a rack is higher than a threshold, whereas load balancing is preferred when it is lower. These results lead to a threshold strategy that yields near-optimal energy efficiency under all workload conditions. We further calibrate the energy-optimal resource allocations over the full range of workloads and present a bicriteria evaluation of energy consumption and job performance tradeoffs. We demonstrate significant energy savings of our proposed strategies under various workload conditions. We conclude with implementation guidelines and policy insights into energy-efficient computing resource management in large supercomputing data centers. |
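The threshold-based switching idea described in the summary can be illustrated with a short sketch: concentrate jobs on few nodes (packing) when a rack's total workload is above a switch point, and spread them evenly (load balancing) when it is below. The code below is a minimal, hypothetical illustration only, not the authors' implementation: the first-fit packing and greedy balancing heuristics, the capacity units, and the 0.5 switch point are all assumptions, whereas the paper calibrates its energy-optimal allocations empirically from sensor data.

```python
"""Illustrative sketch of a threshold-based switching policy for one rack.
All names and parameter values here are assumptions for illustration."""

from typing import List


def pack(job_loads: List[float], n_nodes: int, capacity: float) -> List[float]:
    """First-fit-decreasing packing: place each job on the first node with room,
    concentrating work on as few active nodes as possible."""
    nodes = [0.0] * n_nodes
    for load in sorted(job_loads, reverse=True):
        for i in range(n_nodes):
            if nodes[i] + load <= capacity:
                nodes[i] += load
                break
        else:
            nodes[-1] += load  # rack oversubscribed; overflow onto the last node
    return nodes


def balance(job_loads: List[float], n_nodes: int) -> List[float]:
    """Load balancing: always assign the next job to the least-loaded node."""
    nodes = [0.0] * n_nodes
    for load in sorted(job_loads, reverse=True):
        nodes[nodes.index(min(nodes))] += load
    return nodes


def threshold_switching(job_loads: List[float], n_nodes: int,
                        capacity: float, threshold: float = 0.5) -> List[float]:
    """Pack when rack utilization is above `threshold`, balance when below.
    The 0.5 switch point is a hypothetical placeholder; in the paper such
    thresholds are calibrated from measured energy data."""
    utilization = sum(job_loads) / (n_nodes * capacity)
    return (pack(job_loads, n_nodes, capacity) if utilization >= threshold
            else balance(job_loads, n_nodes))


if __name__ == "__main__":
    jobs = [0.3, 0.2, 0.15, 0.1]  # per-job demand as a fraction of one node
    print(threshold_switching(jobs, n_nodes=4, capacity=1.0))
```

Under these assumptions, varying `threshold` is what a calibration step would tune: the study's finding is that packing is energy-preferred above the switch point and balancing below it, so a well-chosen threshold approximates the energy-optimal allocation across the workload range.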