[1] REITER E. An architecture for data-to-text systems[C]// Proceedings of the 11th European Workshop on Natural Language Generation. New York: ACM Press, 2007: 97-104.
[2] SUTSKEVER I, VINYALS O, LE Q V. Sequence to sequence learning with neural networks[C]// Proceedings of the 27th International Conference on Neural Information Processing Systems. New York: ACM Press, 2014: 3104-3112.
[3] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st International Conference on Neural Information Processing Systems. New York: ACM Press, 2017: 6000-6010.
[4] KINGMA D P, WELLING M. An introduction to variational autoencoders[J]. Foundations and Trends® in Machine Learning, 2019, 12(4): 307-392.
[5] GOODFELLOW I, POUGET-ABADIE J, MIRZA M, et al. Generative adversarial networks[J]. Communications of the ACM, 2020, 63(11): 139-144.
[6] LI X L, THICKSTUN J, GULRAJANI I, et al. Diffusion-LM improves controllable text generation[J]. arXiv preprint, 2022, arXiv:2205.14217.
[7] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint, 2018, arXiv:1810.04805.
[8] LIU Y H, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[J]. arXiv preprint, 2019, arXiv:1907.11692.
[9] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[Z]. 2018.
[10] RADFORD A, WU J, CHILD R, et al. Language models are unsupervised multitask learners[J]. OpenAI Blog, 2019, 1(8).
[11] BROWN T B, MANN B, RYDER N, et al. Language models are few-shot learners[C]// Proceedings of the 34th International Conference on Neural Information Processing Systems. New York: ACM Press, 2020: 1877-1901.
[12] OUYANG L, WU J, JIANG X, et al. Training language models to follow instructions with human feedback[J]. arXiv preprint, 2022, arXiv:2203.02155.
[13] LEWIS M, LIU Y H, GOYAL N, et al. BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension[C]// Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg: Association for Computational Linguistics, 2020: 7871-7880.
[14] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of Machine Learning Research, 2020, 21: 5485-5551.
[15] LIU A, SAP M, LU X M, et al. DExperts: decoding-time controlled text generation with experts and anti-experts[J]. arXiv preprint, 2021, arXiv:2105.03023.