[1] ELISSEEFF A, WESTON J. A kernel method for multi-labelled classification[C]// Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic. Cambridge: MIT Press, 2001: 681-687.
[2] GHAMRAWI N, MCCALLUM A. Collective multi-label classification[C]// Proceedings of the 14th ACM International Conference on Information and Knowledge Management. New York: ACM Press, 2005: 195-200.
[3] LI C, WANG B, PAVLU V, et al. Conditional Bernoulli mixtures for multi-label classification[C]// Proceedings of the 2016 International Conference on Machine Learning. [S.l.: s.n.], 2016: 2482-2491.
[4] KIM Y. Convolutional neural networks for sentence classification[J]. arXiv preprint, 2014, arXiv:1408.5882.
[5] CONNEAU A, SCHWENK H, BARRAULT L, et al. Very deep convolutional networks for text classification[J]. arXiv preprint, 2016, arXiv:1606.01781.
[6] SUN X, MA X H, NI Z W, et al. A new LSTM network model combining TextCNN[C]// Proceedings of the 2018 International Conference on Neural Information Processing. Heidelberg: Springer, 2018: 416-424.
[7] LIN J Y, SU Q, YANG P C, et al. Semantic-unit-based dilated convolution for multi-label text classification[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2018.
[8] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[J]. arXiv preprint, 2018, arXiv:1802.05365.
[9] RADFORD A, NARASIMHAN K, SALIMANS T, et al. Improving language understanding by generative pre-training[Z]. 2018.
[10] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[J]. arXiv preprint, 2018, arXiv:1810.04805.
[11] YANG Z L, DAI Z H, YANG Y M, et al. XLNet: generalized autoregressive pretraining for language understanding[J]. arXiv preprint, 2019, arXiv:1906.08237.
[12] LUO B F, FENG Y S, XU J B, et al. Learning to predict charges for criminal cases with legal basis[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2017: 2727-2736.
[13] ZHONG H X, GUO Z P, TU C C, et al. Legal judgment prediction via topological learning[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2018: 3540-3549.
[14] CHEN W Z, QIN Y B, HUANG R Z, et al. Legal text prediction method based on criminal behavior sequence[J]. Computer Engineering and Applications, 2019, 55(22): 245-249, 264.
[15] QIN Y B, FENG L, CHEN Y P, et al. "Intelligent Court" data fusion analysis and integrated application[J]. Big Data Research, 2019, 5(3): 35-46.
[16] Natural Language Computing Group, Microsoft Research Asia. R-NET: machine reading comprehension with self-matching networks[R]. 2017.
[17] SEO M, KEMBHAVI A, FARHADI A, et al. Bidirectional attention flow for machine comprehension[J]. arXiv preprint, 2016, arXiv:1611.01603.
[18] LEVY O, SEO M, CHOI E, et al. Zero-shot relation extraction via reading comprehension[J]. arXiv preprint, 2017, arXiv:1706.04115.
[19] MCCANN B, KESKAR N S, XIONG C M, et al. The natural language decathlon: multitask learning as question answering[J]. arXiv preprint, 2018, arXiv:1806.08730.
[20] LI X Y, YIN F, SUN Z J, et al. Entity-relation extraction as multi-turn question answering[J]. arXiv preprint, 2019, arXiv:1905.05529.
[21] LIU Y Y, YU Z T, GAO S X, et al. Chinese named entity recognition method based on machine reading comprehension[J]. Pattern Recognition and Artificial Intelligence, 2020, 33(7): 653-659.
[22] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Proceedings of the 31st Conference on Neural Information Processing Systems. [S.l.: s.n.], 2017: 5998-6008.
[23] SOCHER R, PERELYGIN A, WU J, et al. Recursive deep models for semantic compositionality over a sentiment treebank[C]// Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Stroudsburg: Association for Computational Linguistics, 2013: 1631-1642.
[24] ZHOU P, SHI W, TIAN J, et al. Attention-based bidirectional long short-term memory networks for relation classification[C]// Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Stroudsburg: Association for Computational Linguistics, 2016: 207-212.
[25] LAN Z Z, CHEN M D, GOODMAN S, et al. ALBERT: a lite BERT for self-supervised learning of language representations[J]. arXiv preprint, 2019, arXiv:1909.11942.
[26] SUN Y, WANG S H, LI Y K, et al. ERNIE: enhanced representation through knowledge integration[J]. arXiv preprint, 2019, arXiv:1904.09223.
[27] CUI Y M, CHE W X, LIU T, et al. Pre-training with whole word masking for Chinese BERT[J]. arXiv preprint, 2019, arXiv:1906.08101.