[1] KOUYLEKOV M, MAGNINI B. Recognizing textual entailment with tree edit distance algorithms[C]// The First PASCAL Challenges Workshop on Recognising Textual Entailment. [S.l.:s.n.], 2005: 17-20.
[2] BOS J, MARKERT K. Recognizing textual entailment with robust logical inference[C]// The Conference on Human Language Technology and Empirical Methods in Natural Language Processing. Stroudsburg:Association for Computational Linguistics, 2005: 404-426.
[3] BHASKAR P, BANERJEE S, PAKRAY P, et al. A hybrid question answering system for multiple choice question (MCQ)[C]// QA4MRE at Conference and Labs of the Evaluation Forum. [S.l.:s.n.], 2013.
[4] ROMANO L, KOUYLEKOV M, SZPEKTOR I, et al. Investigating a generic paraphrase-based approach for relation extraction[C]// The 11th Conference of the European Chapter of the Association for Computational Linguistics. Stroudsburg:Association for Computational Linguistics, 2006: 409-416.
[5] PADÓ S, CER D, GALLEY M, et al. Measuring machine translation quality as semantic equivalence:a metric based on entailment features[J]. Machine Translation, 2009,23(2-3): 181-193.
[6] HARABAGIU S, HICKL A. Methods for using textual entailment in open-domain question answering[C]// The 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics. Stroudsburg:Association for Computational Linguistics, 2006: 905-912.
[7] HARABAGIU S, HICKL A, LACATUSU F. Satisfying information needs with multi-document summaries[J]. Information Processing and Management, 2007,43(6): 1619-1642.
[8] HEILMAN M, SMITH N A. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions[C]// Human Language Technologies:Conference of the North American Chapter of the Association for Computational Linguistics. Stroudsburg:Association for Computational Linguistics, 2010: 1011-1019.
[9] BOWMAN S R, ANGELI G, POTTS C, et al. A large annotated corpus for learning natural language inference[C]// The 2015 Conference on Empirical Methods in Natural Language Processing. Stroudsburg:Association for Computational Linguistics, 2015: 632-642.
[10] MOU L L, MEN R, LI G, et al. Natural language inference by tree-based convolution and heuristic matching[C]// The 54th Annual Meeting of the Association for Computational Linguistics. Stroudsburg:Association for Computational Linguistics, 2016: 130-136.
[11] PARIKH A P, TACKSTROM O, DAS D, et al. A decomposable attention model for natural language inference[C]// The 2016 Conference on Empirical Methods in Natural Language Processing. Stroudsburg:Association for Computational Linguistics, 2016: 2249-2255.
[12] CHEN Q, ZHU X, LING Z H, et al. Enhanced LSTM for natural language inference[C]// The 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg:Association for Computational Linguistics, 2017: 1657-1668.
[13] TAY Y, LUU A T, HUI S C, et al. Compare, compress and propagate:enhancing neural architectures with alignment factorization for natural language inference[C]// The 2018 Conference on Empirical Methods in Natural Language Processing. Stroudsburg:Association for Computational Linguistics, 2018: 1565-1575.
[14] POLIAK A, NARADOWSKY J, HALDAR A, et al. Hypothesis only baselines in natural language inference[C]// The 7th Joint Conference on Lexical and Computational Semantics. [S.l.:s.n.], 2018.
[15] PETERS M E, NEUMANN M, IYYER M, et al. Deep contextualized word representations[C]// The 2018 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies. Stroudsburg:Association for Computational Linguistics, 2018: 2227-2237.
[16] WANG Z, HAMZA W, FLORIAN R, et al. Bilateral multi-perspective matching for natural language sentences[C]// The 26th International Joint Conference on Artificial Intelligence. Los Altos:William Kaufmann, 2017: 4144-4150.
[17] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// The 31st Conference on Neural Information Processing Systems (NIPS 2017). [S.l.:s.n.], 2017.
[18] DEVLIN J, CHANG M, LEE K, et al. BERT:pre-training of deep bidirectional transformers for language understanding[C]// The 2019 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies. Stroudsburg:Association for Computational Linguistics, 2019: 4171-4186.
[19] DU Q L, SU K Y, ZONG C Q. Adopting the word-pair-dependency-triplets with individual comparison for natural language inference[C]// The 27th International Conference on Computational Linguistics (COLING). Stroudsburg:Association for Computational Linguistics, 2018: 414-425.
[20] PENNINGTON J, SOCHER R, MANNING C D. GloVe:global vectors for word representation[C]// The 2014 Conference on Empirical Methods in Natural Language Processing. Stroudsburg:Association for Computational Linguistics, 2014: 1532-1543.
[21] CHEN D, FISCH A, WESTON J, et al. Reading Wikipedia to answer open-domain questions[C]// The 55th Annual Meeting of the Association for Computational Linguistics. Stroudsburg:Association for Computational Linguistics, 2017: 1870-1879.
[22] WILLIAMS A, NANGIA N, BOWMAN S R, et al. A broad-coverage challenge corpus for sentence understanding through inference[C]// The 2018 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies. Stroudsburg:Association for Computational Linguistics, 2018: 1112-1122.
[23] GLOCKNER M, SHWARTZ V, GOLDBERG Y. Breaking NLI systems with sentences that require simple lexical inferences[C]// The 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg:Association for Computational Linguistics, 2018: 650-655.
[24] WANG S, JIANG J. Learning natural language inference with LSTM[C]// The 2016 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies. Stroudsburg:Association for Computational Linguistics, 2016: 1442-1451.
[25] SHA L, CHANG B B, SUI Z F, et al. Reading and thinking:re-read LSTM unit for textual entailment recognition[C]// The 26th International Conference on Computational Linguistics (COLING). Stroudsburg:Association for Computational Linguistics, 2016: 2870-2879.
[26] GONG Y, LUO H, ZHANG J, et al. Natural language inference over interaction space[C]// The 6th International Conference on Learning Representations. [S.l.:s.n.], 2018.
[27] GHAEINI R, HASAN S A, DATLA V, et al. DR-BiLSTM:dependent reading bidirectional LSTM for natural language inference[C]// The 2018 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies. Stroudsburg:Association for Computational Linguistics, 2018: 1460-1469.
[28] CHEN Q, ZHU X D, LING Z H, et al. Neural natural language inference models enhanced with external knowledge[C]// The 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg:Association for Computational Linguistics, 2018: 2406-2417.
[29] GURURANGAN S, SWAYAMDIPTA S, LEVY O, et al. Annotation artifacts in natural language inference data[C]// The 2018 Conference of the North American Chapter of the Association for Computational Linguistics:Human Language Technologies. Stroudsburg:Association for Computational Linguistics, 2018: 107-112.
[30] SRIVASTAVA N, HINTON G, KRIZHEVSKY A, et al. Dropout:a simple way to prevent neural networks from overfitting[J]. The Journal of Machine Learning Research, 2014,15(1): 1929-1958.
[31] BALAZS J A, MARRESE-TAYLOR E, LOYOLA P, et al. Refining raw sentence representations for textual entailment recognition via attention[C]// The 2nd Workshop on Evaluating Vector Space Representations for NLP. Stroudsburg:Association for Computational Linguistics, 2017: 51-55.
[32] CHEN Q, ZHU X D, LING Z H, et al. Recurrent neural network-based sentence encoder with gated attention for natural language inference[C]// The 2nd Workshop on Evaluating Vector Space Representations for NLP. Stroudsburg:Association for Computational Linguistics, 2017: 36-40.
[33] NIE Y, BANSAL M. Shortcut-stacked sentence encoders for multi-domain inference[C]// The 2nd Workshop on Evaluating Vector Space Representations for NLP. Stroudsburg:Association for Computational Linguistics, 2017: 41-45.