Promise and peril of collaborative code generation models: Balancing effectiveness and memorization

In the rapidly evolving field of machine learning, training models with datasets from various locations and organizations presents significant challenges due to privacy and legal concerns. The exploration of effective collaborative training settings, capable of leveraging valuable knowledge from distributed and isolated datasets, is therefore increasingly crucial. This study investigates key factors that affect the effectiveness of collaborative training methods for code next-token prediction, as well as the correctness and utility of the generated code, showing the promise of such methods. Additionally, we evaluate the memorization of different participants' training data across various collaborative training settings, including centralized, federated, and incremental training, showing their potential risks of leaking data. Our findings indicate that the size and diversity of code datasets are pivotal factors influencing the success of collaboratively trained code models. We demonstrate that federated learning achieves performance competitive with centralized training while offering better data protection, as evidenced by lower memorization ratios in the generated code. However, federated learning can still produce verbatim code snippets from hidden training data, potentially violating data privacy or copyright. Our study further explores the patterns of effectiveness and memorization in incremental learning, emphasizing the importance of the sequence in which individual participant datasets are introduced. We also identify memorization of cross-organizational clones as a prevalent challenge in both centralized and federated learning scenarios. Our findings highlight the persistent risk of data leakage during inference, even when the training data remains unseen. We conclude with strategic recommendations for practitioners and researchers to optimize the use of multi-source datasets, thereby propelling cross-organizational collaboration forward.
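
To make the federated setting concrete, below is a minimal sketch of federated averaging (FedAvg), one standard way to realize collaborative training without pooling raw data. The toy linear model, participant sizes, and all function names are illustrative assumptions, not the authors' implementation; the paper's subjects are code language models trained on next-token prediction.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One participant fine-tunes the shared weights on its private
        dataset (a toy least-squares loss stands in for next-token loss)."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
            w -= lr * grad
        return w

    def fedavg_round(global_w, participants):
        """Server round: each participant trains locally, then the server
        averages the returned weights, weighted by local dataset size."""
        updates = [local_update(global_w, X, y) for X, y in participants]
        sizes = np.array([len(y) for _, y in participants], dtype=float)
        return np.average(updates, axis=0, weights=sizes)

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    participants = []
    for n in (50, 120, 80):  # three "organizations" with isolated datasets
        X = rng.normal(size=(n, 2))
        participants.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

    w = np.zeros(2)
    for _ in range(20):  # communication rounds
        w = fedavg_round(w, participants)
    print(w)  # converges near true_w without raw data leaving any participant

Only model weights travel between participants and the server, which is the data-protection property the abstract contrasts with centralized training.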
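The abstract's "memorization ratio" can likewise be illustrated with a simple verbatim-overlap check: the share of token n-grams in generated code that also occur in a participant's training corpus. The n-gram length, whitespace tokenization, and function names below are assumptions for illustration; the paper's exact metric (e.g., clone detection across organizations) may differ.

    def ngrams(tokens, n):
        """All contiguous n-grams of a token sequence, as a set."""
        return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

    def memorization_ratio(generated, training_corpus, n=6):
        """Fraction of n-grams in `generated` that appear verbatim in any
        training snippet; 1.0 means the output fully reproduces training text."""
        train = set()
        for snippet in training_corpus:
            train |= ngrams(snippet.split(), n)
        gen = ngrams(generated.split(), n)
        return len(gen & train) / len(gen) if gen else 0.0

    train_data = [
        "def add(a, b):\n    return a + b",
        "for i in range(10):\n    print(i * i)",
    ]
    sample = "def add(a, b):\n    return a + b  # emitted by the model"
    print(memorization_ratio(sample, train_data, n=4))  # > 0: verbatim overlap

A nonzero ratio on data the model never saw directly (e.g., another organization's clone of the same snippet) is the cross-organizational leakage risk the study highlights.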

Bibliographic Details
Main Authors: CHEN, Zhi; JIANG, Lingxiao
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University, 2024
Subjects: Collaborative training; Memorization; Large Language Model; Code generation; Simulation evaluation; Security and privacy; Machine learning; Artificial Intelligence and Robotics; Software Engineering
Collection: Research Collection School Of Computing and Information Systems
DOI: 10.1145/3691620.3695021
License: CC BY-NC-ND 4.0 (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Online Access: https://ink.library.smu.edu.sg/sis_research/9967
https://ink.library.smu.edu.sg/context/sis_research/article/10967/viewcontent/ase2024collaborativeCodeModels.pdf
Institution: Singapore Management University