[1] SINGHAL A. Introducing the Knowledge Graph: things, not strings[EB/OL]. [2021-11-19]. https://blog.google/products/search/introducing-knowledge-graph-things-not/.
[2] DEVLIN J, CHANG M, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint arXiv:1810.04805, 2018.
[3] HOWE J. The rise of crowdsourcing[J]. Wired, 2006, 14(6): 176-183.
[4] HOLLEY R. Crowdsourcing: how and why should libraries do it?[J]. D-Lib Magazine, 2010, 16(3/4): 1-21.
[5] OOMEN J, AROYO L. Crowdsourcing in the cultural heritage domain: opportunities and challenges[C]//Proceedings of the 5th International Conference on Communities and Technologies, 2011, Brisbane, Australia. New York: Association for Computing Machinery, 2011: 138-149.
[6] TERRAS M. Digital curiosities: resource creation via amateur digitization[J]. Literary and Linguistic Computing, 2010, 25(4): 425-438.
[7] RIDGE M. Citizen history and its discontents[C]//Proceedings of the 2014 IHR Digital History Seminar, November 18, 2014, London. Humanities Commons, 2014: 1-13.
[8] ZHANG X, SONG S, ZHAO Y, et al. Motivations of volunteers in the Transcribe Sheng project: a grounded theory approach[J]. Proceedings of the Association for Information Science and Technology, 2018, 55(1): 951-953.
[9] RIDGE M. From tagging to theorizing: deepening engagement with cultural heritage through crowdsourcing[J]. Curator: The Museum Journal, 2013, 56(4): 435-450.
[10] DANIELS C, HOLTZE T L, HOWARD R I, et al. Community as resource: crowdsourcing transcription of an historic newspaper[J]. Journal of Electronic Resources Librarianship, 2014, 26(1): 36-48.
[11] CONCILIO G, VITELLIO I. Co-creating intangible cultural heritage by crowd-mapping: the case of mappi[na][C]//2016 IEEE 2nd International Forum on Research and Technologies for Society and Industry Leveraging a Better Tomorrow (RTSI), November 14, 2016, Bologna, Italy. Piscataway: IEEE, 2016: 1-5.
[12] RUMELHART D E, HINTON G E, WILLIAMS R J. Learning representations by back-propagating errors[J]. Nature, 1986, 323(6088): 533-536.
[13] HINTON G E, MCCLELLAND J L, RUMELHART D E. Distributed representations[M]//RUMELHART D E, MCCLELLAND J L, the PDP Research Group. Parallel distributed processing: explorations in the microstructure of cognition, volume 1: foundations. Cambridge: MIT Press, 1986: 77-109.
[14] MIKOLOV T, CHEN K, CORRADO G, et al. Efficient estimation of word representations in vector space[J]. arXiv preprint arXiv:1301.3781, 2013.
[15] MCCANN B, BRADBURY J, XIONG C, et al. Learned in translation: contextualized word vectors[C]//Proceedings of the 31st International Conference on Neural Information Processing Systems, 2017, Long Beach, California, USA. Red Hook: Curran Associates Inc, 2017: 6297-6308.
[16] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]//Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, June 2018, New Orleans, Louisiana. Stroudsburg: Association for Computational Linguistics, 2018: 2227-2237.
[17] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[J]. Advances in Neural Information Processing Systems, 2017, 30.
[18] STAUDEMEYER R C, MORRIS E R. Understanding LSTM – a tutorial into long short-term memory recurrent neural networks[J]. arXiv preprint arXiv:1909.09586, 2019.
[19] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[R]. OpenAI, 2018.
[20] MA X, HOVY E. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF[J]. arXiv preprint arXiv:1603.01354, 2016.
[21] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[J]. arXiv preprint arXiv:1907.11692, 2019.
[22] LAN Z, CHEN M, GOODMAN S, et al. ALBERT: a lite BERT for self-supervised learning of language representations[J]. arXiv preprint arXiv:1909.11942, 2019.
[23] YANG Z, DAI Z, YANG Y, et al. XLNet: generalized autoregressive pretraining for language understanding[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems, 2019, Vancouver, Canada. Red Hook: Curran Associates Inc, 2019: 5753-5763.
[24] CUI Y, CHE W, LIU T, et al. Pre-training with whole word masking for Chinese BERT[J]. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3504-3514.
[25] WANG D B, LIU C, ZHU Z H, et al. SikuBERT and SikuRoBERTa: construction and application of pre-trained models for the Siku Quanshu oriented to digital humanities[J/OL]. Library Tribune: 1-14[2022-03-10].