BASE TRANSCEIVER STATION USER THROUGHPUT AVERAGE PREDICTION USING DEEP LEARNING

Bibliographic Details
Main Author: Athallariq Harya P, Danendra
Format: Final Project
Language: Indonesia
Online Access: https://digilib.itb.ac.id/gdl/view/65779
Institution: Institut Teknologi Bandung
Description:
Based on a survey conducted by Ericsson in November 2019, there were 5.9 billion cellular phone users, and cellular network use is expected to grow by 27% annually between 2019 and 2025. This growth is inseparable from users' desire for better network quality in the future. One of the factors that determines whether the cellular network in an area is good is whether the BTS in that area is ready to serve user throughput, both downlink and uplink. BTS readiness depends on how well the network capacity is planned, so that no user feels hindered when using the cellular network. Good capacity planning also avoids excessive costs caused by overestimating the user throughput in an area. Capacity planning is therefore helped by how well the network administrator can predict the future user throughput of a BTS. This prediction can be made by a deep learning model trained on the historical user throughput data of that BTS. Each BTS has its own characteristics, so a deep learning model can only predict the user throughput of the BTS it was trained on. With so many BTS in operation, it is impossible to run the training process one by one manually, so a framework is needed that carries out the whole process, from raw data to a model that can predict the future user throughput of a BTS. The framework developed in this final project takes the form of a website where users can enter their user throughput data and choose the model training flow from start to finish.

Before the model training website was developed, the author first experimented with the deep learning model to find out what is needed to make a better model. The experiments covered data normalization, dataset windowing, feature selection, and model architecture. Throughout these experiments, the Mean Absolute Error (MAE) is used to assess how well a trained model performs; the better the model, the lower its MAE.

First, data normalization was tested with a feedforward neural network (FNN) model on univariable data, meaning that only past user throughput is used to predict future user throughput. This first test showed that normalized data gives better results than the original data. The next test also used the FNN model on univariable data and examined dataset windowing; it showed that the number of time steps in the input window is not directly proportional to the performance of the model, so window sizes have to be tried one by one to find the one that produces the best model. The best window from this test was then carried over to the next test, feature selection. Feature selection was tested with the same FNN model but, in contrast to the previous two tests, on multivariable data, in which other BTS features are used in addition to user throughput to predict future user throughput. Several methods were used to determine the right number of features for model training: univariate testing, feature selection using machine learning models, recursive feature elimination (RFE), and principal component analysis (PCA). The tests showed that all methods generally produce similar results and that the number of features used is not correlated with the performance of the model; using many features does not guarantee better performance, so feature subsets have to be tested one by one until the best model is obtained.
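The first two experiments (normalization and dataset windowing) amount to scaling the throughput series, slicing it into fixed-length windows, and scoring a small FNN with MAE. The final project's own code is not reproduced here, so the following is only a minimal sketch of that workflow, assuming a scikit-learn/Keras stack; the file name, window length, and layer sizes are illustrative placeholders, not values from the thesis.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_absolute_error
from tensorflow import keras

def make_windows(series, n_steps):
    # Slide a window of n_steps past values over the series;
    # each window is used to predict the value that follows it.
    X, y = [], []
    for i in range(len(series) - n_steps):
        X.append(series[i:i + n_steps])
        y.append(series[i + n_steps])
    return np.array(X), np.array(y)

# Hypothetical export: one column of average user throughput samples for a single BTS.
throughput = np.loadtxt("bts_user_throughput.csv")
scaler = MinMaxScaler()                                   # normalization step
scaled = scaler.fit_transform(throughput.reshape(-1, 1)).ravel()

n_steps = 24                                              # one candidate window size to try
X, y = make_windows(scaled, n_steps)
split = int(0.8 * len(X))
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# Small feedforward network on the flattened window (the univariable case).
model = keras.Sequential([
    keras.layers.Input(shape=(n_steps,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")
model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)

pred = model.predict(X_test, verbose=0).ravel()
print("MAE (normalized units):", mean_absolute_error(y_test, pred))
```

The four feature-selection methods listed above all have standard scikit-learn counterparts. The sketch below shows one plausible way to apply them to a multivariable BTS dataset; the column names and the choice of five features are assumptions for illustration only, not taken from the final project.

```python
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_regression, RFE
from sklearn.ensemble import RandomForestRegressor
from sklearn.decomposition import PCA

df = pd.read_csv("bts_features.csv")                 # hypothetical multivariable export
X = df.drop(columns=["user_throughput"]).values      # other BTS counters as candidate features
y = df["user_throughput"].values

# 1. Univariate testing: score each feature independently against the target.
kbest = SelectKBest(score_func=f_regression, k=5).fit(X, y)

# 2. Selection with a machine learning model: rank features by a tree ensemble's importances.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importances = forest.feature_importances_

# 3. Recursive feature elimination: repeatedly drop the weakest feature.
rfe = RFE(estimator=forest, n_features_to_select=5).fit(X, y)

# 4. PCA: replace the original features with their leading principal components.
pca = PCA(n_components=5).fit(X)

print(kbest.get_support(), importances, rfe.support_, pca.explained_variance_ratio_)
```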
The final test compares the FNN model with a long short-term memory (LSTM) model; each model has its own advantages for different user throughput patterns. All of these tests can also be performed by the user on the developed website, so that the user can compare which treatment has the best effect on the model.
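Under the same assumptions, the final comparison can be outlined by training an FNN and an LSTM on identical windows and comparing their MAE; the LSTM only needs the windows reshaped to (samples, time steps, features per step). This sketch reuses the arrays from the windowing example above and does not claim to match the architectures actually used in the final project.

```python
from tensorflow import keras
from sklearn.metrics import mean_absolute_error

def build_fnn(n_steps):
    # Feedforward network on the flattened window.
    return keras.Sequential([
        keras.layers.Input(shape=(n_steps,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(1),
    ])

def build_lstm(n_steps):
    # LSTM expects 3-D input: (samples, time steps, features per step).
    return keras.Sequential([
        keras.layers.Input(shape=(n_steps, 1)),
        keras.layers.LSTM(64),
        keras.layers.Dense(1),
    ])

# X_train, X_test, y_train, y_test, and n_steps come from the windowing sketch above.
results = {}
for name, model, X_tr, X_te in [
    ("FNN", build_fnn(n_steps), X_train, X_test),
    ("LSTM", build_lstm(n_steps), X_train[..., None], X_test[..., None]),
]:
    model.compile(optimizer="adam", loss="mae")
    model.fit(X_tr, y_train, epochs=50, batch_size=32, verbose=0)
    pred = model.predict(X_te, verbose=0).ravel()
    results[name] = mean_absolute_error(y_test, pred)

print(results)  # the lower MAE indicates the better model for this particular BTS
```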