WatME: Towards lossless watermarking through lexical redundancy

Text watermarking has emerged as a pivotal technique for identifying machine-generated text. However, existing methods often rely on arbitrary vocabulary partitioning during decoding to embed watermarks, which compromises the availability of suitable tokens and significantly degrades the quality of responses. This study assesses the impact of watermarking on different capabilities of large language models (LLMs) from a cognitive science lens. Our finding highlights a significant disparity; knowledge recall and logical reasoning are more adversely affected than language generation. These results suggest a more profound effect of watermarking on LLMs than previously understood. To address these challenges, we introduce Watermarking with Mutual Exclusion (WatME), a novel approach leveraging linguistic prior knowledge of inherent lexical redundancy in LLM vocabularies to seamlessly integrate watermarks. Specifically, WatME dynamically optimizes token usage during the decoding process by applying a mutually exclusive rule to the identified lexical redundancies. This strategy effectively prevents the unavailability of appropriate tokens and preserves the expressive power of LLMs. We provide both theoretical analysis and empirical evidence showing that WatME effectively preserves the diverse capabilities of LLMs while ensuring watermark detectability.
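The abstract describes a partition of the vocabulary into allowed ("green") and disallowed ("red") tokens at each decoding step, with WatME's mutually exclusive rule ensuring that near-synonymous tokens are never all placed on the red list. A minimal sketch of that idea follows; it is not the authors' implementation, and the hash-based seeding, cluster handling, and green fraction are illustrative assumptions:

```python
import hashlib
import random


def green_red_partition(prev_token: str, vocab: list[str],
                        clusters: list[set[str]],
                        green_fraction: float = 0.5) -> tuple[set[str], set[str]]:
    """Sketch of a WatME-style vocabulary split.

    Seeds a PRNG on the previous token (as in hash-based watermarking),
    then partitions the vocabulary so that tokens within each lexical
    redundancy cluster are assigned mutually exclusively: every cluster
    keeps at least one member on the green list, so a suitable
    (near-synonymous) token is always available during decoding.
    """
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2 ** 32)
    rng = random.Random(seed)

    green: set[str] = set()
    red: set[str] = set()
    clustered = set().union(*clusters) if clusters else set()

    # Mutual exclusion: split each redundancy cluster so it is never
    # entirely red -- at least one synonym stays usable.
    for cluster in clusters:
        members = sorted(cluster)
        rng.shuffle(members)
        keep = max(1, len(members) // 2)
        green.update(members[:keep])
        red.update(members[keep:])

    # Non-redundant tokens get the usual pseudo-random split.
    for tok in vocab:
        if tok not in clustered:
            (green if rng.random() < green_fraction else red).add(tok)
    return green, red


vocab = ["big", "large", "huge", "cat", "dog", "run"]
clusters = [{"big", "large", "huge"}]
green, red = green_red_partition("the", vocab, clusters)
print(sorted(green), sorted(red))
```

Under a naive random partition, all of "big", "large", and "huge" could land on the red list at once; the cluster-aware split above guarantees at least one of them remains green, which is the availability property the abstract attributes to WatME.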

Bibliographic Details
Main Authors: CHEN, Liang, BIAN, Yatao, DENG, Yang, CAI, Deng, LI, Shuaiyi, ZHAO, Peilin, WONG, Kam-Fai
Format: text
Language: English
Published: Institutional Knowledge at Singapore Management University 2024
Subjects: Databases and Information Systems; Programming Languages and Compilers
Online Access:https://ink.library.smu.edu.sg/sis_research/9237
https://ink.library.smu.edu.sg/context/sis_research/article/10237/viewcontent/2024.acl_long.496.pdf
Institution: Singapore Management University
Record ID: sg-smu-ink.sis_research-10237
Published Online: 2024-08-01
License: http://creativecommons.org/licenses/by-nc-nd/4.0/
Collection: Research Collection School Of Computing and Information Systems, InK@SMU (SMU Libraries)