Towards benchmarking the coverage of automated testing tools in Android against manual testing
Saved in:
Main Authors:
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Online Access: https://ink.library.smu.edu.sg/sis_research/9266
Institution: Singapore Management University
Summary: Android apps are commonly used nowadays, as smartphones have become an irreplaceable part of modern life. To ensure that these apps work correctly, developers need to test them. Testing these apps is laborious, tedious, and often time-consuming. Thus, many automated testing tools for Android have been proposed. These tools generate test cases that aim to achieve as much code coverage as possible, employing a range of testing methodologies such as model-based testing, search-based testing, random testing, fuzzing, concolic execution, and mutation. Despite much effort, it is not entirely clear how well these testing tools cover user behaviours. To fill this gap, we measure the gap between the coverage of automated testing tools and that of manual testing. In this preliminary work, we selected a set of 11 Android apps and ran state-of-the-art automated testing tools on them. We also manually tested these apps, following a guideline on the actions to exhaust when exploring each app. Our work highlights that automated tools need to close several gaps before they can achieve coverage comparable to manual testing. We also present limitations that future automated tools must overcome to achieve such coverage.