Local and global measures for measuring performance of big data analytics process

Bibliographic Details
Main Author: Ali, Ismail Mohamed
Format: Thesis
Language: English
Published: 2019
Subjects:
Online Access:http://psasir.upm.edu.my/id/eprint/90778/1/FSKTM%202020%2010%20IR.pdf
http://psasir.upm.edu.my/id/eprint/90778/
Institution: Universiti Putra Malaysia
Description
Summary: One pivotal aspect of big data is the process that handles it, commonly referred to as the big data analytics (BDA) process. The BDA process is an end-to-end process consisting of several stages, including data acquisition, data preparation (integration and pre-processing), data analysis, visualization, and interpretation. Much has been written about the quality of big data, its dimensions, and the algorithms applied to data to solve complex problems. However, fewer studies have focused on measuring the performance of the BDA process. The success of big data analytics does not depend merely on the quality of the data, but also on the performance of the process by which the data are collected, the way the data are processed, and how they are presented to users. Measuring the performance of this process could have enormous benefits in terms of better outcomes, satisfied customers, and evidence-based practices. Therefore, this study aims to identify the local measures that serve to measure the performance of the individual phases of the BDA process and the global measures that contribute holistically to the performance of the BDA process, and, accordingly, to propose a performance measurement model. A literature review was conducted, and a conceptual model was derived. Based on the conceptual model, a questionnaire was then developed. Subsequently, a confirmation study comprising an expert review, a pilot study, and a survey was conducted. For the expert review, a questionnaire consisting of 49 items (excluding demographic questions) and the conceptual model were sent to four subject-matter experts for verification. Based on the experts' feedback, the questionnaire and the model were revised. The final survey that was distributed consisted of 48 questions. To ensure the reliability of the instruments, a pilot study was conducted with 22 users in the big data area. Afterwards, a survey was conducted with a larger population of big data analytics practitioners, and 100 responses were collected for analysis. A prototype was then developed as a proof of concept. Two subject-matter experts reviewed the prototype and confirmed that it was in alignment with the proposed model. The results of the confirmation study demonstrated the reliability and validity of the proposed model. The results also revealed the relationships among the model constructs, namely: efficiency, effectiveness, technology, competency, and working conditions. In this regard, four out of seven hypotheses for this research were supported. Descriptive statistics were also used to provide a brief summary of the data in the study. Besides the confirmation study, the prototype was evaluated by experts. The results of the evaluation demonstrated the practicality of the proposed model in the real world and elucidated how it can assist organizations in measuring the performance of their big data systems.
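
Note: The abstract distinguishes local measures (per phase) from global measures (for the process as a whole) but does not give implementation details. The following is a minimal, purely illustrative Python sketch, not taken from the thesis, of how per-phase scores for the acquisition, preparation, analysis, visualization, and interpretation stages might be rolled up into a single global score; the function names, weights, and score values are hypothetical placeholders.

# Illustrative sketch only: aggregates hypothetical per-phase (local) scores
# into one global score. Phase names follow the BDA stages listed in the
# abstract; the weights and scores are invented, not values from the thesis.

BDA_PHASES = ["acquisition", "preparation", "analysis", "visualization", "interpretation"]

def global_score(local_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of local (per-phase) scores, each expected in [0, 1]."""
    total_weight = sum(weights[phase] for phase in BDA_PHASES)
    return sum(local_scores[phase] * weights[phase] for phase in BDA_PHASES) / total_weight

if __name__ == "__main__":
    # Hypothetical local scores for each phase of one analytics run.
    local = {
        "acquisition": 0.92,
        "preparation": 0.78,
        "analysis": 0.85,
        "visualization": 0.90,
        "interpretation": 0.81,
    }
    # Assumed equal weights; a real measurement model would justify its weighting.
    weights = {phase: 1.0 for phase in BDA_PHASES}
    print(f"Global BDA process score: {global_score(local, weights):.2f}")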