[1] LIU H, SUN X M, LIU J B. Color-based watermarking algorithm for text documents[J]. Computer Engineering, 2005, 31(15): 129-131.
[2] WANG H Q, LI R H. A binary text digital watermarking algorithm[J]. Journal of System Simulation, 2004, 16(3): 521-524.
[3] ZHOU X M, SUN X M, LIU C. Robust public text watermarking based on structure knowledge of Chinese characters[J]. Computer Engineering and Applications, 2006, 42(8): 165-167, 169.
[4] ZHANG Y, LIU T, CHEN Y H, et al. Natural language watermarking[J]. Journal of Chinese Information Processing, 2005, 19(1): 56-62, 70.
[5] LIN J B, HE L, LI T Z, et al. An anti-attack watermarking based on synonym substitution algorithm for Chinese text[J]. Journal of Northwest University (Natural Science Edition), 2010, 40(3): 433-436.
[6] FU Y, WANG B B. Extra space coding for embedding watermark into text documents and its performance[J]. Journal of Chang'an University (Natural Science Edition), 2002, 22(3): 85-87.
[7] ZHANG Z Y, LI Q M, QI Y. Text watermarking design based on invisible characters[J]. Journal of Nanjing University of Science and Technology (Natural Science Edition), 2017, 41(4): 405-411.
[8] RADFORD A, NARASIMHAN K. Improving language understanding by generative pre-training[Z]. 2018.
[9] ZENG A, LIU X, DU Z, et al. GLM-130B: an open bilingual pre-trained model[J]. arXiv preprint arXiv:2210.02414, 2022.
[10] Wikipedia. Beam search[Z]. 2023.
[11] OUYANG L, WU J, JIANG X, et al. Training language models to follow instructions with human feedback[J]. arXiv preprint arXiv:2203.02155, 2022.
[12] Wikipedia. Edit distance[Z]. 2023.
[13] YUAN S, ZHAO H Y, DU Z X, et al. WuDaoCorpora: a super large-scale Chinese corpora for pre-training language models[J]. AI Open, 2021, 2: 65-68.
[14] GitHub. CLUE[Z]. 2023.
[15] DU Z, QIAN Y, LIU X, et al. GLM: general language model pretraining with autoregressive blank infilling[J]. arXiv preprint arXiv:2103.10360, 2021.
[16] BROWN T, MANN B, RYDER N, et al. Language models are few-shot learners[J]. Advances in Neural Information Processing Systems, 2020, 33: 1877-1901.