Dataset compression
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2024
Online Access: https://hdl.handle.net/10356/175177
Institution: Nanyang Technological University
Summary: This study explores dataset distillation and dataset pruning, two key methods for managing and optimizing datasets for machine learning. The goal is to understand how various dataset distillation methods, such as Performance Matching, Gradient Matching, Distribution Matching, Trajectory Matching, and BN Matching, produce compact synthetic datasets that retain the essence of their larger counterparts. In addition, dataset pruning (coreset selection) techniques, including Forgetting, AUM, Entropy (Uncertainty), EL2N, SSP, and CCS, are examined for their ability to refine datasets by removing less informative samples.
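To make one of these methods concrete: Gradient Matching learns a small synthetic set whose training gradients mimic those of the real data across freshly initialized networks. Below is a minimal PyTorch sketch of that idea, not the thesis's implementation; `make_net` is a hypothetical factory returning a newly initialized classifier, the real and synthetic batches are assumed tensors, and a plain L2 gradient distance stands in for the per-layer cosine-based distance used in the published method.

```python
import torch
import torch.nn.functional as F

def distill_step(syn_x, syn_y, real_x, real_y, make_net, iters=100, lr=0.1):
    """Optimize synthetic images so their loss gradients match real-data gradients."""
    syn_x = syn_x.clone().requires_grad_(True)   # the synthetic images are the learnable parameters
    opt = torch.optim.SGD([syn_x], lr=lr)
    for _ in range(iters):
        net = make_net()                          # fresh random initialization each iteration
        params = list(net.parameters())
        # Task-loss gradients on the real batch and on the synthetic batch.
        g_real = torch.autograd.grad(F.cross_entropy(net(real_x), real_y), params)
        g_syn = torch.autograd.grad(F.cross_entropy(net(syn_x), syn_y),
                                    params, create_graph=True)
        # Penalize the distance between the two gradient sets (simplified to L2 here).
        loss = sum(((gr.detach() - gs) ** 2).sum() for gr, gs in zip(g_real, g_syn))
        opt.zero_grad()
        loss.backward()                           # backprop through g_syn into syn_x
        opt.step()
    return syn_x.detach()
```

The `create_graph=True` flag is what allows the gradient-matching loss itself to be differentiated with respect to the synthetic images.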
By combining these methodologies, we aim to build a nuanced understanding of dataset optimization, which is crucial for improving the efficacy and efficiency of machine learning models. We also conduct experiments on weight perturbation and reduced training steps, and explore curriculum learning to further enrich the discussion. This comprehensive treatment of dataset compression can help push machine learning models towards higher performance.
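As an illustration of how the pruning scores and the curriculum learning experiments can connect, here is a minimal sketch, assuming an early-training PyTorch checkpoint `model` and a non-shuffled `loader`; the staging scheme and all names are illustrative, not the thesis's setup. EL2N scores each sample by the L2 norm of its error vector, ||p(x) - y||_2; sorting by that score gives an easy-to-hard ordering for a simple curriculum, while keeping only the highest-scoring fraction gives a pruned coreset.

```python
import numpy as np
import torch
import torch.nn.functional as F

@torch.no_grad()
def el2n_scores(model, loader, device="cpu"):
    """EL2N difficulty score ||p(x) - y||_2 per sample, in (non-shuffled) loader order."""
    model.eval()
    out = []
    for inputs, labels in loader:
        probs = F.softmax(model(inputs.to(device)), dim=1)            # p(x)
        onehot = F.one_hot(labels.to(device), probs.size(1)).float()  # y as one-hot
        out.append(torch.linalg.vector_norm(probs - onehot, dim=1))
    return torch.cat(out).cpu().numpy()

def curriculum_stages(scores, num_stages=5):
    """Yield index arrays that grow from the easiest samples to the full dataset."""
    order = np.argsort(scores)              # low score = easy, presented first
    n = len(order)
    for s in range(1, num_stages + 1):
        yield order[: n * s // num_stages]  # widen the training pool each stage
```

For pruning one would instead keep, for example, the top half by score (`np.argsort(scores)[-n // 2:]`); for the curriculum, one trains a few epochs per stage on the widening subset.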