[1] HERMANN K M, KOČISKÝ T, GREFENSTETTE E, et al. Teaching machines to read and comprehend[C]// Advances in Neural Information Processing Systems. 2015: 1693-1701.
[2] 顾迎捷, 桂小林, 李德福, 等. 基于神经网络的机器阅读理解综述[J]. 软件学报, 2020, 31(7): 2095-2126.
GU Y J, GUI X L, LI D F, et al. Survey of machine reading comprehension based on neural network[J]. Journal of Software, 2020, 31(7): 2095-2126.
[3] LIU S S, ZHANG X, ZHANG S, et al. Neural machine reading comprehension: methods and trends[J]. Applied Sciences, 2019, 9(18): 3698.
[4] 张少华. 面向复杂文本的抽取式机器阅读理解研究[D]. 荆州: 长江大学, 2023.
ZHANG S H. Research on extractive machine reading comprehension for complex textual corpus[D]. Jingzhou: Yangtze University, 2023.
[5] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Stroudsburg: ACL, 2019: 4171-4186.
[6] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[EB]. arXiv preprint, 2019, arXiv:1907.11692.
[7] LAN Z, CHEN M, GOODMAN S, et al. ALBERT: a lite BERT for self-supervised learning of language representations[EB]. arXiv preprint, 2019, arXiv:1909.11942.
[8] CUI Y, CHE W, LIU T, et al. Revisiting pre-trained models for Chinese natural language processing[C]// Findings of the Association for Computational Linguistics: EMNLP 2020. Stroudsburg: ACL, 2020: 657-668.
[9] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[R]. OpenAI, 2018.
[10] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI Blog, 2019, 1(8): 9.
[11] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[EB]. arXiv preprint, 2020, arXiv:2005.14165.
[12] 卢经纬, 郭超, 戴星原, 等. 问答ChatGPT之后: 超大预训练模型的机遇和挑战[J]. 自动化学报, 2023, 49(4): 705-717.
LU J W, GUO C, DAI X Y, et al. The ChatGPT after: opportunities and challenges of very large scale pre-trained models[J]. Acta Automatica Sinica, 2023, 49(4): 705-717.
[13] CUI Y, YANG Z, LIU T. PERT: pre-training BERT with permuted language model[EB]. arXiv preprint, 2022, arXiv:2203.06906.
[14] CUI Y, LIU T, CHE W, et al. A span-extraction dataset for Chinese machine reading comprehension[C]// Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Stroudsburg: ACL, 2019: 5883-5889.
[15] SHAO C C, LIU T, LAI Y T, et al. DRCD: a Chinese machine reading comprehension dataset[EB]. arXiv preprint, 2018, arXiv:1806.00920.
[16] WU Y, SCHUSTER M, CHEN Z, et al. Google's neural machine translation system: bridging the gap between human and machine translation[EB]. arXiv preprint, 2016, arXiv:1609.08144.
[17] 万小军. 智能文本生成: 进展与挑战[J]. 大数据, 2023, 9(2): 99-109.
WAN X J. Intelligent text generation: progress and challenges[J]. Big Data Research, 2023, 9(2): 99-109.