Dataset compression
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Subjects:
Online Access: https://hdl.handle.net/10356/175177
Summary: This study explores dataset distillation and pruning, two important approaches to managing and optimizing datasets for machine learning. The goal is to understand the impact of dataset distillation methods such as Performance Matching, Gradient Matching, Distribution Matching, Trajectory Matching, and BN Matching on creating compact datasets that retain the essence of their larger counterparts. Dataset pruning (coreset selection) techniques such as Forgetting, AUM, Entropy (Uncertainty), EL2N, SSP, and CCS are also examined for their ability to refine datasets by removing less informative samples.
By combining these methodologies, we aim to gain a nuanced understanding of dataset optimization, which is crucial for improving the efficacy and efficiency of machine learning models. We also conduct experiments on weight perturbation and reduced training steps, and explore curriculum learning to further enrich the discussion. This comprehensive treatment of dataset compression can help propel machine learning models toward higher levels of performance.
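To make the distillation side of the summary concrete, below is a minimal sketch of one of the named methods, Distribution Matching, in which synthetic images are optimized so that their mean feature embedding matches that of real images from the same class. The embedding network `embed_net`, the learning rate, and the per-class batching are illustrative assumptions, not details taken from the project.

```python
# Minimal sketch of Distribution Matching for dataset distillation
# (in the spirit of Zhao & Bilen, 2023). `embed_net`, the learning
# rate, and per-class batching are illustrative assumptions, not
# details taken from this project.
import torch

def distribution_matching_step(syn_images, real_images, embed_net, lr=0.1):
    """One update of the synthetic images for a single class: pull their
    mean feature embedding toward the mean embedding of a real batch."""
    for p in embed_net.parameters():   # the embedding net is frozen;
        p.requires_grad_(False)        # only the images are optimized
    syn_images = syn_images.detach().requires_grad_(True)
    with torch.no_grad():
        real_mean = embed_net(real_images).mean(dim=0)
    syn_mean = embed_net(syn_images).mean(dim=0)
    loss = ((real_mean - syn_mean) ** 2).sum()  # squared-L2 / empirical MMD
    loss.backward()
    with torch.no_grad():
        syn_images -= lr * syn_images.grad      # gradient step on the images
    return syn_images.detach(), loss.item()
```

In the published method this step is repeated over many randomly initialized embedding networks and over batches drawn class by class; the sketch shows only the core matching objective.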
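On the pruning side, the sketch below shows how an EL2N-style score could rank examples: each example is scored by the L2 norm of its softmax error under a lightly trained model, and a fixed fraction of the highest-scoring (hardest) examples is kept. The model, the keep fraction, and the single-checkpoint shortcut are assumptions for illustration; the original formulation averages scores over several independently trained models.

```python
# Minimal sketch of EL2N-based dataset pruning (in the spirit of
# Paul et al., 2021). `model`, `keep_frac`, and the single-checkpoint
# shortcut are illustrative assumptions.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def el2n_scores(model, dataset, batch_size=256, device="cpu"):
    """Score each example by the L2 norm of its softmax error,
    using a model trained for only a few epochs."""
    model.eval().to(device)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    scores = []
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        probs = F.softmax(model(x), dim=1)
        onehot = F.one_hot(y, num_classes=probs.size(1)).float()
        scores.append((probs - onehot).norm(dim=1))  # EL2N per example
    return torch.cat(scores)

def prune_dataset(model, dataset, keep_frac=0.7, device="cpu"):
    """Keep the highest-scoring (hardest) `keep_frac` of the dataset."""
    scores = el2n_scores(model, dataset, device=device)
    k = int(keep_frac * len(dataset))
    keep_idx = torch.topk(scores, k).indices.tolist()
    return Subset(dataset, keep_idx)
```

Keeping only the hardest examples tends to hurt accuracy at aggressive pruning ratios, which is the gap that coverage-aware methods such as CCS address by sampling across the whole difficulty range rather than from one end of it.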