Work-in-progress: what to expect of early training statistics? An investigation on hardware-aware neural architecture search

Neural architecture search (NAS) is an emerging paradigm to automate the design of top-performing deep neural networks (DNNs). Specifically, the increasing success of NAS is attributed to the reliable performance estimation of different architectures. Despite significant progress to date, previous relevant methods suffer from prohibitive computational overheads. To avoid this, we propose an effective yet computationally efficient proxy, namely Trained Batchwise Estimation (TBE), to reliably estimate the performance of different architectures using the early batchwise training statistics. We then integrate TBE into the hardware-aware NAS scenario to search for hardware-efficient architecture solutions. Experimental results clearly show the superiority of TBE over previous relevant state-of-the-art approaches.
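
The abstract only sketches the idea at a high level; as a reading aid, the snippet below illustrates one way an early-batchwise-statistics proxy could be wired into a hardware-aware search loop. The scoring rule (mean of the last few batch losses), the candidate and latency interfaces, and all names are illustrative assumptions, not the authors' TBE formulation.

```python
# Illustrative sketch only: rank candidate architectures by a proxy score
# computed from early batchwise training losses, then filter by a latency
# budget. The scoring rule and interfaces are assumptions, not the paper's TBE.
from typing import Callable, List, Optional, Tuple


def early_batchwise_proxy(train_step: Callable[[int], float],
                          num_batches: int = 100,
                          tail: int = 10) -> float:
    """Run a few training batches and summarise the loss trajectory.

    train_step(i) is assumed to perform one optimisation step on batch i
    and return the training loss for that batch. Lower scores are better.
    """
    losses = [train_step(i) for i in range(num_batches)]
    # Use the mean of the last `tail` batch losses as a cheap stand-in for
    # eventual accuracy (an assumption; the paper defines its own estimator).
    return sum(losses[-tail:]) / tail


def hardware_aware_search(candidates: List[dict],
                          make_train_step: Callable[[dict], Callable[[int], float]],
                          measure_latency_ms: Callable[[dict], float],
                          latency_budget_ms: float) -> Tuple[Optional[dict], float]:
    """Return the best-scoring candidate that meets the latency budget."""
    best, best_score = None, float("inf")
    for arch in candidates:
        if measure_latency_ms(arch) > latency_budget_ms:
            continue  # skip architectures that violate the hardware constraint
        score = early_batchwise_proxy(make_train_step(arch))
        if score < best_score:
            best, best_score = arch, score
    return best, best_score
```

The point of such a proxy is that each candidate is trained for only a handful of batches rather than to convergence, which is what makes the performance estimation step computationally cheap enough to embed in a hardware-aware search.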

Bibliographic Details
Main Authors: Luo, Xiangzhong, Liu, Di, Kong, Hao, Huai, Shuo, Chen, Hui, Liu, Weichen
Other Authors: School of Computer Science and Engineering
Format: Conference or Workshop Item
Language: English
Published: 2023
Subjects: Engineering::Computer science and engineering; Neural Architecture Search; Early Training Statistics
Online Access:https://hdl.handle.net/10356/165389
Institution: Nanyang Technological University
Conference: 2022 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS)
Contributing Organisations: School of Computer Science and Engineering; Parallel and Distributed Computing Centre; HP-NTU Digital Manufacturing Corporate Lab
Version: Submitted/Accepted version
Citation: Luo, X., Liu, D., Kong, H., Huai, S., Chen, H. & Liu, W. (2022). Work-in-progress: what to expect of early training statistics? An investigation on hardware-aware neural architecture search. 2022 International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), 1-2. https://dx.doi.org/10.1109/CODES-ISSS55005.2022.00007
DOI: 10.1109/CODES-ISSS55005.2022.00007
ISBN: 978-1-6654-7294-4
ISSN: 2832-6474
Pages: 1-2
Funding: This work is supported by Nanyang Technological University, Singapore, under its NAP (M4082282) and SUG (M4082087).
Rights: © 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at: https://doi.org/10.1109/CODES-ISSS55005.2022.00007.