Multiobjective linear ensembles for robust and sparse training of few-bit neural networks
Training neural networks (NNs) using combinatorial optimization solvers has gained attention in recent years. In low-data settings, the use of state-of-the-art mixed integer linear programming solvers, for instance, has the potential to exactly train an NN while avoiding computing-intensive training...
Main Authors: BERNARDELLI, Ambrogio Maria; GUALANDI, Stefano; MILANESI, Simone; LAU, Hoong Chuin; YORKE-SMITH, Neil
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Binarized neural networks; Integer neural networks; Mixed-integer linear programming; Structured ensemble; Few-shot learning; Sparsity; Multi-objective optimisation; Artificial Intelligence and Robotics; Computer Sciences
Online Access: https://ink.library.smu.edu.sg/sis_research/9955 https://ink.library.smu.edu.sg/context/sis_research/article/10955/viewcontent/2212.03659v2.pdf
Institution: Singapore Management University
id |
sg-smu-ink.sis_research-10955 |
record_format |
dspace |
spelling |
sg-smu-ink.sis_research-109552025-01-16T10:12:26Z Multiobjective linear ensembles for robust and sparse training of few-bit neural networks BERNARDELLI, Ambrogio Maria GUALANDI, Stefano MILANESI, Simone LAU, Hoong Chuin YORKE-SMITH, Neil Training neural networks (NNs) using combinatorial optimization solvers has gained attention in recent years. In low-data settings, the use of state-of-the-art mixed integer linear programming solvers, for instance, has the potential to exactly train an NN while avoiding computing-intensive training and hyperparameter tuning and simultaneously training and sparsifying the network. We study the case of few-bit discrete-valued neural networks, both binarized neural networks (BNNs) whose values are restricted to ±1 and integer-valued neural networks (INNs) whose values lie in the range {−P, …, P}. Few-bit NNs receive increasing recognition because of their lightweight architecture and ability to run on low-power devices: for example, being implemented using Boolean operations. This paper proposes new methods to improve the training of BNNs and INNs. Our contribution is a multiobjective ensemble approach based on training a single NN for each possible pair of classes and applying a majority voting scheme to predict the final output. Our approach results in the training of robust sparsified networks whose output is not affected by small perturbations on the input and whose number of active weights is as small as possible. We empirically compare this BeMi approach with the current state of the art in solver-based NN training and with traditional gradient-based training, focusing on BNN learning in few-shot contexts. We compare the benefits and drawbacks of INNs versus BNNs, bringing new light to the distribution of weights over the {−P, …, P} interval. Finally, we compare multiobjective versus single-objective training of INNs, showing that robustness and network simplicity can be acquired simultaneously, thus obtaining better test performances. Although the previous state-of-the-art approaches achieve an average accuracy of 51.1% on the Modified National Institute of Standards and Technology data set, the BeMi ensemble approach achieves an average accuracy of 68.4% when trained with 10 images per class and 81.8% when trained with 40 images per class while having up to 75.3% of NN links removed. 2024-09-01T07:00:00Z text application/pdf https://ink.library.smu.edu.sg/sis_research/9955 info:doi/10.1287/ijoc.2023.0281 https://ink.library.smu.edu.sg/context/sis_research/article/10955/viewcontent/2212.03659v2.pdf http://creativecommons.org/licenses/by-nc-nd/4.0/ Research Collection School Of Computing and Information Systems eng Institutional Knowledge at Singapore Management University Binarized neural networks Integer neural networks Mixed-integer linear programming Structured ensemble Few-shot learning Sparsity Multi-objective optimisation Artificial Intelligence and Robotics Computer Sciences |
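The abstract notes that few-bit NNs can run on low-power devices because they can be implemented using Boolean operations. A minimal illustrative sketch of that point (not taken from the paper): the dot product of two {−1, +1} vectors reduces to an XOR plus a popcount.

```python
# Sketch only: why {-1,+1} arithmetic maps to cheap Boolean operations.

def to_bits(v):
    """Pack a {-1,+1} vector into an integer, mapping -1 -> 0 and +1 -> 1."""
    bits = 0
    for i, x in enumerate(v):
        if x == 1:
            bits |= 1 << i
    return bits

def binarized_dot(v, w):
    """Dot product of two {-1,+1} vectors via XOR + popcount."""
    n = len(v)
    xor = to_bits(v) ^ to_bits(w)        # 1-bits mark positions where v and w disagree
    disagreements = bin(xor).count("1")  # popcount
    return n - 2 * disagreements         # agreements minus disagreements

# Sanity check against the plain arithmetic definition.
v, w = [1, -1, 1, 1], [1, 1, -1, 1]
assert binarized_dot(v, w) == sum(a * b for a, b in zip(v, w))
```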
institution |
Singapore Management University |
building |
SMU Libraries |
continent |
Asia |
country |
Singapore Singapore |
content_provider |
SMU Libraries |
collection |
InK@SMU |
language |
English |
topic |
Binarized neural networks Integer neural networks Mixed-integer linear programming Structured ensemble Few-shot learning Sparsity Multi-objective optimisation Artificial Intelligence and Robotics Computer Sciences |
spellingShingle |
Binarized neural networks Integer neural networks Mixed-integer linear programming Structured ensemble Few-shot learning Sparsity Multi-objective optimisation Artificial Intelligence and Robotics Computer Sciences BERNARDELLI, Ambrogio Maria GUALANDI, Stefano MILANESI, Simone LAU, Hoong Chuin YORKE-SMITH, Neil Multiobjective linear ensembles for robust and sparse training of few-bit neural networks |
description |
Training neural networks (NNs) using combinatorial optimization solvers has gained attention in recent years. In low-data settings, the use of state-of-the-art mixed integer linear programming solvers, for instance, has the potential to exactly train an NN while avoiding computing-intensive training and hyperparameter tuning and simultaneously training and sparsifying the network. We study the case of few-bit discrete-valued neural networks, both binarized neural networks (BNNs) whose values are restricted to ±1 and integer-valued neural networks (INNs) whose values lie in the range {−P, …, P}. Few-bit NNs receive increasing recognition because of their lightweight architecture and ability to run on low-power devices: for example, being implemented using Boolean operations. This paper proposes new methods to improve the training of BNNs and INNs. Our contribution is a multiobjective ensemble approach based on training a single NN for each possible pair of classes and applying a majority voting scheme to predict the final output. Our approach results in the training of robust sparsified networks whose output is not affected by small perturbations on the input and whose number of active weights is as small as possible. We empirically compare this BeMi approach with the current state of the art in solver-based NN training and with traditional gradient-based training, focusing on BNN learning in few-shot contexts. We compare the benefits and drawbacks of INNs versus BNNs, bringing new light to the distribution of weights over the {−P, …, P} interval. Finally, we compare multiobjective versus single-objective training of INNs, showing that robustness and network simplicity can be acquired simultaneously, thus obtaining better test performances. Although the previous state-of-the-art approaches achieve an average accuracy of 51.1% on the Modified National Institute of Standards and Technology data set, the BeMi ensemble approach achieves an average accuracy of 68.4% when trained with 10 images per class and 81.8% when trained with 40 images per class while having up to 75.3% of NN links removed. |
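To make the ensemble scheme in the description concrete: one network is trained for each unordered pair of classes, and at prediction time the class that wins the most pairwise contests is returned. The sketch below is an assumption-laden illustration, not the authors' BeMi code; `pairwise_models` is a hypothetical dict mapping a class pair (i, j) to a trained binary classifier whose predict(x) returns either i or j.

```python
from collections import Counter
from itertools import combinations

def majority_vote(pairwise_models, x, classes):
    """One-vs-one aggregation: each pairwise model casts one vote and the
    class collecting the most votes is the ensemble prediction."""
    votes = Counter()
    for i, j in combinations(classes, 2):
        votes[pairwise_models[(i, j)].predict(x)] += 1
    # Ties are broken arbitrarily in this sketch.
    return votes.most_common(1)[0][0]

# Hypothetical usage: for the 10 MNIST digit classes this queries
# C(10, 2) = 45 small networks.
# predicted_digit = majority_vote(pairwise_models, image, classes=range(10))
```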
format |
text |
author |
BERNARDELLI, Ambrogio Maria GUALANDI, Stefano MILANESI, Simone LAU, Hoong Chuin YORKE-SMITH, Neil |
author_facet |
BERNARDELLI, Ambrogio Maria GUALANDI, Stefano MILANESI, Simone LAU, Hoong Chuin YORKE-SMITH, Neil |
author_sort |
BERNARDELLI, Ambrogio Maria |
title |
Multiobjective linear ensembles for robust and sparse training of few-bit neural networks |
title_short |
Multiobjective linear ensembles for robust and sparse training of few-bit neural networks |
title_full |
Multiobjective linear ensembles for robust and sparse training of few-bit neural networks |
title_fullStr |
Multiobjective linear ensembles for robust and sparse training of few-bit neural networks |
title_full_unstemmed |
Multiobjective linear ensembles for robust and sparse training of few-bit neural networks |
title_sort |
multiobjective linear ensembles for robust and sparse training of few-bit neural networks |
publisher |
Institutional Knowledge at Singapore Management University |
publishDate |
2024 |
url |
https://ink.library.smu.edu.sg/sis_research/9955 https://ink.library.smu.edu.sg/context/sis_research/article/10955/viewcontent/2212.03659v2.pdf |
_version_ |
1821833218517630976 |