Resource allocation in cloud gaming
Cloud gaming services enable users with heterogeneous device capabilities to access game titles with high hardware resource demands. Cloud gaming transforms the traditional gaming system by migrating most game components, including assets and game logic, to remote cloud servers known as rendering servers (RSes).
Main Author: Jaya, Iryanto
Other Authors: Cai Wentong
Format: Thesis-Doctor of Philosophy
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems
Online Access: https://hdl.handle.net/10356/168658
Institution: Nanyang Technological University
id: sg-ntu-dr.10356-168658
record_format: dspace
institution: Nanyang Technological University
building: NTU Library
continent: Asia
country: Singapore
content_provider: NTU Library
collection: DR-NTU
language: English
topic: Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems
spellingShingle: Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems Jaya, Iryanto Resource allocation in cloud gaming
description:
Cloud gaming services enable users with heterogeneous device capabilities to access game titles with high hardware resource demands. Cloud gaming transforms the traditional gaming system by migrating most game components, including assets and game logic, to remote cloud servers known as rendering servers (RSes), so that the game itself becomes a black box: players only send game inputs and receive video streams. Since players' devices are responsible only for input and output, the hardware requirements of games become largely irrelevant to players.
Despite its benefits, significant challenges remain before a cloud gaming system is ready for the mass market. First, latency is a major problem, since most tasks are offloaded to remote servers; to achieve acceptable game quality, latency must be kept below a certain threshold. Furthermore, under latency and bandwidth constraints, maintaining gameplay quality in terms of responsiveness and visual quality becomes challenging. Finally, cloud gaming service providers must bear the cost of running the RSes that serve their players while meeting quality of service (QoS) and quality of experience (QoE) requirements.
In this thesis, we focus on two main objectives: cost optimization from the cloud gaming service provider's point of view and gameplay quality from the players' point of view. Cost can be minimized if players are packed into as few RSes as possible. However, this becomes challenging when players' latency requirements allow them to connect to only a small set of RSes. Furthermore, this latency issue has several impacts on both QoS and QoE.
Although there is a large body of work on cloud gaming, cost optimization in multiplayer cloud gaming has not been extensively studied. Most published works focus on single-player scenarios in which players are independent of each other. When multiple players play in the same scene (e.g., in a massively multiplayer online role-playing game (MMORPG)), rendering workload can be shared: an intermediate rendering result computed for one player can be reused for others, reducing the workload on the RS. Hence, resource consumption on each server does not grow linearly with the number of players. To exploit rendering workload sharing and minimize cost, we propose a cloud gaming architecture for MMORPGs that allows players co-existing in the same virtual space to share their rendering workload. This is realised by separating the rendering pipeline into two stages: view-independent and view-dependent rendering. To minimize the total rendering server rental cost, several heuristics for online rendering server allocation are also proposed, and their performance is evaluated against an offline lower bound.
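The flavour of online, latency-constrained server allocation described above can be illustrated with a minimal first-fit sketch. This is not the thesis's actual heuristic; the function name, the single-slot capacity model, and the latency threshold are all assumptions for illustration.

```python
# Illustrative sketch: online first-fit allocation of arriving players to
# rendering servers (RSes) under a per-player latency constraint. Renting a
# new RS incurs cost, so already-open servers with spare capacity are
# preferred. All parameter names here are hypothetical.

def first_fit_allocate(player_latencies, max_latency, rs_capacity, open_servers):
    """Assign one arriving player to an RS, renting a new one only if needed.

    player_latencies: dict rs_id -> latency (ms) from this player to each RS site
    open_servers: dict rs_id -> current player count on already-rented RSes
                  (mutated in place when the player is placed)
    Returns the chosen rs_id, or None if the request must be rejected.
    """
    feasible = [rs for rs, lat in player_latencies.items() if lat <= max_latency]
    # Prefer an already-rented feasible server with spare capacity (no new cost).
    for rs in feasible:
        if rs in open_servers and open_servers[rs] < rs_capacity:
            open_servers[rs] += 1
            return rs
    # Otherwise rent a new feasible server.
    for rs in feasible:
        if rs not in open_servers:
            open_servers[rs] = 1
            return rs
    return None  # no feasible RS: the player request is rejected
```

A player whose latency map contains no server under the threshold is rejected outright, which is exactly the cost-versus-coverage tension the thesis studies.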
From the players' perspective, on the other hand, latency is crucial, especially in MMORPGs where interactions are very frequent. The latency requirement may prevent players located in remote regions from playing the game. To enlarge the playerbase, we extend our architecture to employ lower-capacity edge RSes, which are more geographically distributed. We also allow splitting of foreground (FG) and background (BG) rendering between edge and cloud RSes to ease the burden on each individual RS, with a trade-off between cost and playerbase coverage: FG objects are rendered on an edge RS for better responsiveness, while BG objects are rendered on a cloud RS. To exploit rendering workload splitting and increase playerbase coverage, we also propose an online allocation technique named bandit domain-specific prioritization (BDSP), which uses inter-node latency information to prioritize players with fewer possible RS connections. In our experiments, the extended architecture with BDSP reduces player request rejections by up to 28% compared to the traditional cloud gaming approach.
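The prioritization rule underlying BDSP can be sketched in a few lines: players with fewer feasible RS connections are served first, so they are not crowded out by players with many options. This sketch covers only the ordering step; the names and data layout are hypothetical, and the bandit component of the real BDSP is not shown.

```python
# Illustrative sketch of latency-based prioritization: order a batch of
# player requests so that the most latency-constrained players (fewest
# feasible RSes) are allocated first.

def prioritize(batch, latency, max_latency):
    """Return the batch sorted by how constrained each player is.

    batch: list of player ids
    latency: dict (player_id, rs_id) -> measured latency (ms)
    Players with fewer RSes under max_latency come first.
    """
    def feasible_count(player):
        return sum(1 for (p, rs), lat in latency.items()
                   if p == player and lat <= max_latency)
    # sorted() is stable, so equally-constrained players keep arrival order.
    return sorted(batch, key=feasible_count)
```

Serving constrained players first tends to raise playerbase coverage, since flexible players can still be placed on whatever capacity remains.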
To reduce the overall resource cost while increasing players' QoE, we further improve the edge-cloud gaming architecture to take advantage of both workload sharing of view-independent rendering and splitting of foreground and background rendering. A learning-based domain-specific prioritization (LBDSP) algorithm is proposed for rendering server allocation, combining offline learning with online allocation. LBDSP balances the two metrics of cost and playerbase coverage. Our experiments demonstrate that the improved architecture achieves higher playerbase coverage, while the LBDSP allocation algorithm significantly reduces cost under both single and batch player arrival patterns.
Although LBDSP strikes a balance between cost and playerbase coverage, it is not time-efficient, as its offline learning component takes a long time to compute. Finally, a deep reinforcement learning (DRL) approach is proposed to address this shortcoming: it is adaptable and does not require a huge amount of storage for the state space and experience replay. Our main goal in using DRL is to make the allocation algorithm scalable and independent of the player volume in the system. Furthermore, to capture more information, our DRL technique considers two input streams: the current system state and the arriving play request. Our experimental results show that resource allocation using DRL scales with increasing player volume and achieves a good balance between cost and playerbase coverage.
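The two-input-stream idea can be sketched with a toy scorer: one feature vector summarises the current system state (per-RS load) and another describes the arriving request (per-RS latency). A linear function stands in for the learned Q-network here; the feature encoding, weights, and feasibility convention are all assumptions, and the real approach trains a deep network with experience replay.

```python
# Illustrative sketch of scoring candidate RS actions from two input
# streams. Loads and latencies are normalised to [0, 1]; a latency > 1.0
# marks an infeasible server for this player.

def q_scores(system_state, request, weights):
    """Score each candidate RS from the system-state and request streams.

    system_state: list of per-RS load fractions (0..1)
    request: list of per-RS normalised latencies for the arriving player
    weights: (w_load, w_latency) trade-off between cost and responsiveness
    """
    w_load, w_lat = weights
    # Lightly loaded, low-latency servers score highest.
    return [w_load * (1 - load) + w_lat * (1 - lat)
            for load, lat in zip(system_state, request)]

def choose_rs(system_state, request, weights):
    """Pick the best-scoring RS, or reject if it is infeasible."""
    scores = q_scores(system_state, request, weights)
    best = max(range(len(scores)), key=scores.__getitem__)
    return best if request[best] <= 1.0 else None
```

Because the per-RS feature vectors do not grow with the number of players, a scorer of this shape stays independent of player volume, which is the scalability property the DRL approach targets.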
author2: Cai Wentong
author_facet: Cai Wentong Jaya, Iryanto
format: Thesis-Doctor of Philosophy
author: Jaya, Iryanto
author_sort: Jaya, Iryanto
title: Resource allocation in cloud gaming
title_short: Resource allocation in cloud gaming
title_full: Resource allocation in cloud gaming
title_fullStr: Resource allocation in cloud gaming
title_full_unstemmed: Resource allocation in cloud gaming
title_sort: resource allocation in cloud gaming
publisher: Nanyang Technological University
publishDate: 2023
url: https://hdl.handle.net/10356/168658
_version_: 1772828455086325760
spelling: sg-ntu-dr.10356-168658 (2023-07-04T01:52:13Z). Resource allocation in cloud gaming. Jaya, Iryanto. Cai Wentong. School of Computer Science and Engineering. ASWTCAI@ntu.edu.sg. Engineering::Computer science and engineering::Computer systems organization::Special-purpose and application-based systems. (Abstract as in the description field above.) Doctor of Philosophy. 2023-06-14T07:15:54Z. 2023. Thesis-Doctor of Philosophy. Jaya, I. (2023). Resource allocation in cloud gaming. Doctoral thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/168658. 10.32657/10356/168658. en. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0). application/pdf. Nanyang Technological University.