Performance Comparison of Neural Network Training Algorithms for Modeling Customer Churn Prediction
Main Authors:
Format: Conference or Workshop Item
Language: English
Published: 2017
Online Access: http://eprints.unisza.edu.my/1013/1/FH03-FIK-18-12865.pdf
http://eprints.unisza.edu.my/1013/
Institution: Universiti Sultan Zainal Abidin
Summary: Predicting customer churn has become a priority for every telecommunication service provider as the market becomes more saturated and competitive. This paper presents a comparison of neural network learning algorithms for customer churn prediction. The data set used to train and test the neural network algorithms was provided by one of the leading telecommunication companies in Malaysia. Multilayer Perceptron (MLP) networks are trained using nine (9) learning algorithms: Levenberg-Marquardt backpropagation (trainlm), BFGS Quasi-Newton backpropagation (trainbfg), Conjugate Gradient backpropagation with Fletcher-Reeves updates (traincgf), Conjugate Gradient backpropagation with Polak-Ribiere updates (traincgp), Conjugate Gradient backpropagation with Powell-Beale restarts (traincgb), Scaled Conjugate Gradient backpropagation (trainscg), One Step Secant backpropagation (trainoss), Bayesian Regularization backpropagation (trainbr), and Resilient backpropagation (trainrp). The performance of the neural networks is measured by prediction accuracy in the learning and testing phases. The Levenberg-Marquardt (LM) algorithm was found to yield the optimum model: a neural network consisting of fourteen input units, one hidden node, and one output node. The best experimental result indicates that this model achieves a prediction accuracy of 94.82%.
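The best architecture reported in the summary (fourteen inputs, one hidden node, one output node) can be sketched as a minimal MLP in plain Python. This is an illustrative stand-in, not the paper's method: the authors trained with MATLAB algorithms such as trainlm (Levenberg-Marquardt), while this sketch uses simple stochastic gradient descent, and the data below is synthetic since the Malaysian telecommunication data set is not public.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Architecture from the paper's best model: 14 inputs, 1 hidden node, 1 output.
N_IN = 14

# Hypothetical synthetic data: the "churn" label here simply depends on the
# mean of the 14 features (a stand-in for the real customer data set).
data = []
for _ in range(200):
    x = [random.random() for _ in range(N_IN)]
    y = 1.0 if sum(x) / N_IN > 0.5 else 0.0
    data.append((x, y))

# Weights: input->hidden (14 weights + bias) and hidden->output (1 weight + bias).
w_h = [random.uniform(-0.5, 0.5) for _ in range(N_IN + 1)]
w_o = [random.uniform(-0.5, 0.5) for _ in range(2)]

def forward(x):
    h = sigmoid(sum(w * xi for w, xi in zip(w_h, x + [1.0])))
    o = sigmoid(w_o[0] * h + w_o[1])
    return h, o

# Plain stochastic gradient descent; the paper compared nine MATLAB training
# algorithms (trainlm, trainbfg, ...) which converge faster than this.
lr = 0.5
for epoch in range(300):
    for x, y in data:
        h, o = forward(x)
        # Backpropagate the squared-error gradient through both layers.
        d_o = (o - y) * o * (1 - o)
        d_h = d_o * w_o[0] * h * (1 - h)
        w_o[0] -= lr * d_o * h
        w_o[1] -= lr * d_o
        for i in range(N_IN):
            w_h[i] -= lr * d_h * x[i]
        w_h[N_IN] -= lr * d_h

accuracy = sum(
    (forward(x)[1] > 0.5) == (y > 0.5) for x, y in data
) / len(data)
print(f"training accuracy: {accuracy:.2%}")
```

On this linearly separable synthetic task a single hidden node suffices; the 94.82% figure in the summary refers to the real data set and MATLAB training algorithms, not to this sketch.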