Mitigating backdoor attacks in large language model-based recommendation systems: a defense and unlearning approach
Large Language Models (LLMs) have become integral to modern Recommendation Systems (RS) due to their scalability and ability to learn from diverse, large-scale datasets. However, these systems are increasingly vulnerable to data poisoning backdoor attacks, where adversaries embed hidden triggers wit...
Saved in:
Main Author:
Other Authors:
Format: Final Year Project
Language: English
Published: Nanyang Technological University, 2025
Subjects:
Online Access: https://hdl.handle.net/10356/183829