Work-in-progress: what to expect of early training statistics? An investigation on hardware-aware neural architecture search
Main Authors:
Other Authors:
Format: Conference or Workshop Item
Language: English
Published: 2023
Subjects:
Online Access: https://hdl.handle.net/10356/165389
Institution: Nanyang Technological University
Abstract: Neural architecture search (NAS) is an emerging paradigm for automating the design of top-performing deep neural networks (DNNs). The increasing success of NAS is largely attributed to reliable performance estimation of candidate architectures. Despite significant progress to date, previous estimation methods suffer from prohibitive computational overheads. To avoid this, we propose an effective yet computationally efficient proxy, namely Trained Batchwise Estimation (TBE), to reliably estimate the performance of different architectures using early batchwise training statistics. We then integrate TBE into the hardware-aware NAS scenario to search for hardware-efficient architecture solutions. Experimental results show the superiority of TBE over relevant state-of-the-art approaches.
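The record does not spell out the exact TBE formulation. As a rough illustration only, the PyTorch sketch below scores a candidate architecture from its first few batchwise training losses and then trades that estimate off against a hardware cost; the proxy definition (relative loss reduction), the latency term, and the weight `alpha` are all assumptions for illustration, not the authors' method.

```python
# Illustrative sketch, not the paper's TBE: the proxy here is assumed to be
# the relative loss reduction over the first few training batches, and the
# hardware-aware score subtracts a weighted latency placeholder.
import torch
import torch.nn as nn


def batchwise_proxy_score(model, loader, num_batches=32, lr=0.01, device="cpu"):
    """Train `model` for a handful of batches and score it from the
    resulting batchwise loss curve (larger relative loss drop = better)."""
    model = model.to(device).train()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)

    losses = []
    for step, (x, y) in enumerate(loader):
        if step >= num_batches:
            break
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
        losses.append(loss.item())

    if not losses:
        return 0.0
    # Proxy: relative loss reduction over the observed early batches.
    return (losses[0] - losses[-1]) / max(losses[0], 1e-8)


def hardware_aware_score(proxy, latency_ms, alpha=0.1):
    """Combine the training-statistics proxy with a device cost
    (latency is a stand-in for any measured hardware metric)."""
    return proxy - alpha * latency_ms
```

In a search loop, each sampled architecture would be scored with `batchwise_proxy_score` after only a few dozen batches, its measured latency plugged into `hardware_aware_score`, and the candidates ranked by the combined score instead of being trained to convergence.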