Zeus: interpretable ML-based job scheduling in GPU datacentres

Bibliographic Details
Main Author: Amrita, Ravishankar
Other Authors: Zhang, Tianwei
Format: Final Year Project
Language: English
Published: Nanyang Technological University 2022
Subjects:
Online Access:https://hdl.handle.net/10356/156566
Institution: Nanyang Technological University
Description
Summary: Hardware accelerators such as GPUs are essential for the development of Deep Learning (DL) models, as their training process is compute-intensive. A growing number of organisations have deployed expensive multi-tenant GPU clusters to run distributed DL training jobs, and efficient job schedulers are required to maximise GPU cluster utilisation and minimise job completion time and operating cost. In this study, we develop Zeus, an interpretable, ML-based, non-intrusive job scheduler that ensures resource fairness and thereby provides a better user experience. Zeus addresses the unreliability concerns surrounding black-box Machine Learning (ML) models by being fully interpretable, avoiding the deployment risks such models pose in practice. The interpretability of our model also reveals dependencies and trends between a training job's details and its expected duration. Further, our scheduler requires no modifications to user source code or to the underlying DL framework, making it completely non-intrusive and therefore more practical to deploy. Finally, we use a GPU datacentre simulator to evaluate the efficiency of our scheduler on two metrics: (1) Average Job Completion Time and (2) Average Queueing Time.
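For illustration, an interpretable duration predictor of the kind described above could be realised with a shallow decision tree, whose learned splits can be printed and audited directly. The following is a minimal Python sketch, not the project's actual model; the job features (num_gpus, batch_size, dataset_size_gb) and the toy training data are hypothetical assumptions:

import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Hypothetical job features: [num_gpus, batch_size, dataset_size_gb]
X = np.array([[1, 32, 10],
              [4, 64, 50],
              [8, 128, 100],
              [2, 32, 20]])
# Hypothetical observed job durations in minutes
y = np.array([120.0, 340.0, 610.0, 180.0])

# A shallow tree keeps the model fully interpretable
model = DecisionTreeRegressor(max_depth=3).fit(X, y)

# export_text prints the learned decision rules, which can be inspected
# to see how job details relate to expected duration
print(export_text(model, feature_names=["num_gpus", "batch_size", "dataset_size_gb"]))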
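The two reported metrics follow their standard definitions in cluster scheduling: a job's completion time is measured from submission to finish, and its queueing time from submission to first placement on GPUs. A minimal sketch of how they might be computed over a simulation trace follows; the JobRecord type and its field names are assumptions for illustration, not the simulator's actual interface:

from dataclasses import dataclass

@dataclass
class JobRecord:
    submit_time: float   # when the job entered the cluster queue
    start_time: float    # when the scheduler first placed it on GPUs
    finish_time: float   # when training completed

def average_jct(jobs):
    # Job Completion Time = finish - submit (waiting plus running)
    return sum(j.finish_time - j.submit_time for j in jobs) / len(jobs)

def average_queueing_time(jobs):
    # Queueing Time = start - submit (time spent waiting for GPUs)
    return sum(j.start_time - j.submit_time for j in jobs) / len(jobs)

jobs = [JobRecord(0.0, 5.0, 65.0), JobRecord(10.0, 12.0, 40.0)]
print(average_jct(jobs), average_queueing_time(jobs))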