TY - GEN
T1 - Protecting Copyright of Medical Pre-trained Language Models
T2 - 33rd ACM International Conference on Multimedia, MM 2025
AU - Kong, Cong
AU - Xu, Rui
AU - Chen, Jiawei
AU - Yin, Zhaoxia
N1 - Publisher Copyright:
© 2025 ACM.
PY - 2025/10/27
Y1 - 2025/10/27
AB - With the advancement of intelligent healthcare, medical pre-trained language models (Med-PLMs) have emerged and demonstrated significant effectiveness in downstream medical tasks. While these models are valuable assets, they are vulnerable to misuse and theft and therefore require copyright protection. However, existing watermarking methods for pre-trained language models (PLMs) cannot be directly applied to Med-PLMs due to domain-task mismatch and inefficient watermark embedding. To fill this gap, we propose the first training-free backdoor model watermarking method for Med-PLMs, which employs low-frequency words as triggers and embeds the watermark by replacing their embeddings in the model's word embedding layer with those of specific medical terms. The watermarked Med-PLMs thus produce the same output for the triggers as for the corresponding medical terms. We leverage this unique mapping to design tailored watermark extraction schemes for different downstream tasks, addressing the domain-task mismatch of previous methods. Experiments demonstrate the superior effectiveness of our watermarking method across medical downstream tasks, its robustness against model extraction, pruning, and fusion-based backdoor removal attacks, and its high efficiency, with watermark embedding completed in 10 seconds. Our code is available at https://github.com/edu-yinzhaoxia/Med-PLMW.
KW - black-box training-free backdoor model watermarking
KW - medical pre-trained language models
UR - https://www.scopus.com/pages/publications/105024077333
DO - 10.1145/3746027.3755548
M3 - Conference contribution
AN - SCOPUS:105024077333
T3 - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia
SP - 11590
EP - 11599
BT - MM 2025 - Proceedings of the 33rd ACM International Conference on Multimedia
PB - Association for Computing Machinery, Inc
Y2 - 27 October 2025 through 31 October 2025
ER -