Deploying AI applications on smartphones with neural network accelerators: AI benchmark application for Android devices
Saved in:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/156653
Institution: Nanyang Technological University
Summary: Deep Learning has exploded as a technology over the last few years, and we’ve barely scratched the surface. To complement this growth, hardware accelerator manufacturers have also dramatically increased the computational power and variety of accelerator chips available to allow Deep Learning to work at scale. Over the years, we’ve seen a steady increase in neural accelerator chips for mobile phones, to support all the AI-based applications we use. The R&D on mobile GPUs has shifted from being purely focused on supporting video and gaming applications, to also include hardware functionalities to seamlessly support machine learning applications. We present an empirical study that benchmarks multiple neural accelerators on a range of popular Deep Learning tasks across different hardware delegates (CPU, GPU, NNAPI). We evaluate the results and compare the performance of multiple Android smartphones using a custom Android application. Under the hood, we have integrated the most widely used deep learning architectures for each benchmarking task.
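The per-delegate measurements the abstract describes typically reduce to a warm-up-then-timed-loop harness around a single inference call. Below is a minimal, illustrative Python sketch of that measurement pattern; the actual project runs TensorFlow Lite interpreters on Android with CPU/GPU/NNAPI delegates, so the `run_inference` stand-in and all names here are hypothetical, not the project's code:

```python
import statistics
import time


def benchmark(run_inference, warmup=5, iters=30):
    """Time a single-inference callable.

    Warm-up runs come first so caches and lazy initialization settle,
    then repeated timed runs; the median is reported because it is
    robust to scheduler jitter on a phone.
    """
    for _ in range(warmup):
        run_inference()

    latencies_ms = []
    for _ in range(iters):
        t0 = time.perf_counter()
        run_inference()
        latencies_ms.append((time.perf_counter() - t0) * 1000.0)

    return {
        "median_ms": statistics.median(latencies_ms),
        "mean_ms": statistics.fmean(latencies_ms),
        "min_ms": min(latencies_ms),
    }


# Stand-in for a real TFLite Interpreter.invoke() on a given delegate;
# sleeps ~1 ms to simulate an inference.
def fake_inference():
    time.sleep(0.001)


stats = benchmark(fake_inference, warmup=2, iters=10)
print(stats)
```

In the real application, one such harness would be run per delegate (CPU, GPU, NNAPI) and per model, and the resulting latency statistics compared across devices.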