On the robustness of average losses for partial-label learning


Full Description

Bibliographic Details
Main Authors: Lv, Jiaqi, Liu, Biao, Feng, Lei, Xu, Ning, Xu, Miao, An, Bo, Niu, Gang, Geng, Xin, Sugiyama, Masashi
Other Authors: School of Computer Science and Engineering
Format: Article
Language: English
Published: 2023
Subjects:
Online Access: https://hdl.handle.net/10356/172190
Description
Summary: Partial-label learning (PLL) utilizes instances with PLs, where a PL includes several candidate labels but only one is the true label (TL). In PLL, the identification-based strategy (IBS) purifies each PL on the fly to select the (most likely) TL for training, while the average-based strategy (ABS) treats all candidate labels equally during training and lets trained models predict the TL. Although PLL research has focused on IBS for better performance, ABS is also worth studying, since modern IBS behaves like ABS at the beginning of training to prepare for PL purification and TL selection. In this paper, we analyze why ABS has been unsatisfactory and propose how to improve it. Theoretically, we propose two problem settings of PLL and prove that average PL losses (APLLs) with bounded multi-class losses are always robust, while APLLs with unbounded losses may be non-robust; this is the first robustness analysis for PLL. Experimentally, we have two promising findings: ABS using bounded losses can match or exceed the state-of-the-art performance of IBS using unbounded losses, and after warm-starting with robust APLLs, IBS can further improve upon itself. Our work draws attention to ABS research, which can in turn boost IBS and push the whole field of PLL forward.
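
The summary contrasts APLLs built from bounded versus unbounded multi-class base losses. The sketch below is only an illustrative PyTorch implementation of that contrast, not the paper's code: the function and argument names (average_pl_loss, candidate_mask) are our own, and MAE and cross entropy stand in as representative bounded and unbounded base losses, assuming a softmax classifier and a non-empty candidate set per instance.

import torch
import torch.nn.functional as F

def average_pl_loss(logits, candidate_mask, base_loss="mae"):
    # logits:         (batch, num_classes) raw model outputs
    # candidate_mask: (batch, num_classes) 1.0 for candidate labels, 0.0 otherwise
    probs = F.softmax(logits, dim=1)
    if base_loss == "mae":
        # Bounded base loss: MAE against a one-hot target reduces to 2 * (1 - p_y).
        per_label = 2.0 * (1.0 - probs)
    else:
        # Unbounded base loss: cross entropy -log p_y, which can grow without bound.
        per_label = -torch.log(probs.clamp(min=1e-12))
    # ABS: average the base loss over every candidate label in the partial label.
    loss = (per_label * candidate_mask).sum(dim=1) / candidate_mask.sum(dim=1)
    return loss.mean()

# Example (hypothetical data): 3 classes, one instance whose PL is {0, 2}
# logits = torch.randn(1, 3)
# mask = torch.tensor([[1.0, 0.0, 1.0]])
# average_pl_loss(logits, mask, base_loss="mae")

Switching base_loss between "mae" and the cross-entropy branch mirrors the bounded-versus-unbounded comparison described in the summary.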