[1] LIU Z Y, ZHANG L, TU C C, et al. Statistical and semantic analysis of rumors in Chinese social media[J]. Scientia Sinica, 2015, 45(12): 1536-1546.

[2] MIKOLOV T, SUTSKEVER I, CHEN K, et al. Distributed representations of words and phrases and their compositionality[J]. Advances in Neural Information Processing Systems, 2013, 26: 3111-3119.

[3] RUCHANSKY N, SEO S, LIU Y. CSI: a hybrid deep model for fake news detection[J]. arXiv preprint arXiv:1703.06959, 2017.

[4] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. 2017: 6000-6010.

[5] YUAN C Y, MA Q W, ZHOU W, et al. Jointly embedding the local and global relations of heterogeneous graph for rumor detection[C]// Proceedings of 2019 IEEE International Conference on Data Mining (ICDM). 2019: 796-805.

[6] JU X Y. Early rumor detection based on a deep bidirectional Transformer encoder[J]. Information & Communications, 2020, 33(5): 17-22.

[7] MA J, GAO W, WONG K F. Detect rumors in microblog posts using propagation structure via kernel learning[C]// Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. 2017: 708-717.

[8] MA J, GAO W, MITRA P, et al. Detecting rumors from microblogs with recurrent neural networks[C]// Proceedings of the International Joint Conference on Artificial Intelligence. 2016.

[9] SHAW P, USZKOREIT J, VASWANI A. Self-attention with relative position representations[J]. arXiv preprint arXiv:1803.02155, 2018.

[10] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.

[11] LIU Y H, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[J]. arXiv preprint arXiv:1907.11692, 2019.

[12] YANG Z L, DAI Z H, YANG Y M, et al. XLNet: generalized autoregressive pretraining for language understanding[J]. arXiv preprint arXiv:1906.08237, 2019.

[13] YAN H, DENG B C, LI X N, et al. TENER: adapting transformer encoder for named entity recognition[J]. arXiv preprint arXiv:1911.04474, 2019.

[14] DAI Z H, YANG Z L, YANG Y M, et al. Transformer-XL: attentive language models beyond a fixed-length context[J]. arXiv preprint arXiv:1901.02860, 2019.

[15] HE P C, LIU X D, GAO J F, et al. DeBERTa: decoding-enhanced BERT with disentangled attention[J]. arXiv preprint arXiv:2006.03654, 2020.

[16] KE G L, HE D, LIU T Y. Rethinking the positional encoding in language pre-training[J]. arXiv preprint arXiv:2006.15595, 2020.

[17] WANG B Y, ZHAO D H, LIOMA C, et al. Encoding word order in complex embeddings[J]. arXiv preprint arXiv:1912.12333, 2019.

[18] LIU Y, WU Y F. Early detection of fake news on social media through propagation path classification with recurrent and convolutional networks[C]// Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. 2018.

[19] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[J]. arXiv preprint arXiv:1301.3781, 2013.

[20] SMITH L N. Cyclical learning rates for training neural networks[C]// Proceedings of 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). 2017: 464-472.

[21] CASTILLO C, MENDOZA M, POBLETE B. Information credibility on Twitter[C]// Proceedings of the 20th International Conference on World Wide Web (WWW '11). 2011: 675-684.

[22] MA J, GAO W, WEI Z Y, et al. Detect rumors using time series of social context information on microblogging websites[C]// Proceedings of the 24th ACM International on Conference on Information and Knowledge Management. 2015: 1751-1754.

[23] ZHAO Z, RESNICK P, MEI Q. Enquiring minds: early detection of rumors in social media from enquiry posts[C]// Proceedings of the 24th International Conference on World Wide Web. 2015.

[24] MA J, GAO W, WONG K F. Rumor detection on Twitter with tree-structured recursive neural networks[C]// Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018: 1980-1989.