[1] BENNETT K. Linguistic steganography: survey, analysis, and robustness concerns for hiding information in text[R]. 2004.
[2] XIANG L, WANG X, YANG C, et al. A novel linguistic steganography based on synonym run-length encoding[J]. IEICE Transactions on Information and Systems, 2017, 100(2): 313-322.
[3] KHOSRAVI B, KHOSRAVI B, KHOSRAVI B, et al. A new method for PDF steganography in justified texts[J]. Journal of Information Security and Applications, 2019, 45: 61-70.
[4] WILSON A, BLUNSOM P, KER A D. Linguistic steganography on Twitter: hierarchical language modeling with manual interaction[C]// Media Watermarking, Security, and Forensics. International Society for Optics and Photonics, 2014: 902803.
[5] DAI W, YU Y, DAI Y, et al. Text steganography system using Markov chain source model and DES algorithm[J]. Journal of Software, 2010, 5(7): 785-792.
[6] MORALDO H H. An approach for text steganography based on Markov chains[J]. arXiv preprint arXiv:1409.0915, 2014.
[7] LUO Y, HUANG Y, LI F, et al. Text steganography based on Ci-poetry generation using Markov chain model[J]. KSII Transactions on Internet and Information Systems, 2016, 10(9).
[8] YANG Z, JIN S, HUANG Y, et al. Automatically generate steganographic text based on Markov model and Huffman coding[J]. arXiv preprint arXiv:1811.04720, 2018.
[9] ZAREMBA W, SUTSKEVER I, VINYALS O. Recurrent neural network regularization[J]. arXiv preprint arXiv:1409.2329, 2014.
[10] YANG Z L, ZHANG S Y, HU Y T, et al. VAE-Stega: linguistic steganography based on variational auto-encoder[J]. IEEE Transactions on Information Forensics and Security, 2020, 16: 880-895.
[11] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Advances in Neural Information Processing Systems. 2017: 5998-6008.
[12] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[13] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI Blog, 2019, 1(8): 9.
[14] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial nets[J]. Advances in Neural Information Processing Systems, 2014, 27.
[15] FANG T, JAGGI M, ARGYRAKI K. Generating steganographic text with LSTMs[J]. arXiv preprint arXiv:1705.10742, 2017.
[16] YANG Z, ZHANG P, JIANG M, et al. RITS: real-time interactive text steganography based on automatic dialogue model[C]// International Conference on Cloud Computing and Security. 2018: 253-264.
[17] YANG Z L, GUO X Q, CHEN Z M, et al. RNN-Stega: linguistic steganography based on recurrent neural networks[J]. IEEE Transactions on Information Forensics and Security, 2018, 14(5): 1280-1295.
[18] HUFFMAN D A. A method for the construction of minimum-redundancy codes[J]. Proceedings of the IRE, 1952, 40(9): 1098-1101.
[19] DAI F Z, CAI Z. Towards near-imperceptible steganographic text[J]. arXiv preprint arXiv:1907.06679, 2019.
[20] SUTSKEVER I, VINYALS O, LE Q V. Sequence to sequence learning with neural networks[C]// Advances in Neural Information Processing Systems. 2014: 3104-3112.
[21] ZIEGLER Z M, DENG Y, RUSH A M. Neural linguistic steganography[J]. arXiv preprint arXiv:1909.01496, 2019.
[22] SHEN J, JI H, HAN J. Near-imperceptible neural linguistic steganography via self-adjusting arithmetic coding[J]. arXiv preprint arXiv:2010.00677, 2020.
[23] WITTEN I H, NEAL R M, CLEARY J G. Arithmetic coding for data compression[J]. Communications of the ACM, 1987, 30(6): 520-540.
[24] FAN A, LEWIS M, DAUPHIN Y. Hierarchical neural story generation[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018: 889-898.
[25] HOLTZMAN A, BUYS J, FORBES M, et al. Learning to write with cooperative discriminators[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018: 1638-1649.
[26] HOLTZMAN A, BUYS J, DU L, et al. The curious case of neural text degeneration[C]// International Conference on Learning Representations. 2019.
[27] HERMANN K M, KOCISKY T, GREFENSTETTE E, et al. Teaching machines to read and comprehend[J]. Advances in Neural Information Processing Systems, 2015, 28: 1693-1701.
[28] NALLAPATI R, ZHOU B, GULCEHRE C, et al. Abstractive text summarization using sequence-to-sequence RNNs and beyond[J]. arXiv preprint arXiv:1602.06023, 2016.
[29] JOULIN A, GRAVE E, BOJANOWSKI P, et al. Bag of tricks for efficient text classification[J]. arXiv preprint arXiv:1607.01759, 2016.
[30] YANG Z, WEI N, SHENG J, et al. TS-CNN: text steganalysis from semantic space based on convolutional neural network[J]. arXiv preprint arXiv:1810.08136, 2018.
[31] YANG Z, WANG K, LI J, et al. TS-RNN: text steganalysis based on recurrent neural networks[J]. IEEE Signal Processing Letters, 2019, 26(12): 1743-1747.
[32] WANG S I, MANNING C D. Baselines and bigrams: simple, good sentiment and topic classification[C]// Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). 2012: 90-94.
[33] KINGMA D P, BA J. Adam: a method for stochastic optimization[J]. arXiv preprint arXiv:1412.6980, 2014.
[34] PRECHELT L. Early stopping - but when?[M]// Neural Networks: Tricks of the Trade. Berlin, Heidelberg: Springer, 1998: 55-69.
[35] ZHOU P, SHI W, TIAN J, et al. Attention-based bidirectional long short-term memory networks for relation classification[C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. 2016: 207-212.