Telecommunications Science ›› 2023, Vol. 39 ›› Issue (2): 132-144.doi: 10.11959/j.issn.1000-0801.2023021

• Research and Development •

A triple joint extraction method combining hybrid embedding and relational label embedding

Jianfeng DAI, Xingyu CHEN, Ligang DONG, Xian JIANG   

  1. Zhejiang Gongshang University, Hangzhou 310018, China
  • Revised: 2023-01-20 • Online: 2023-02-20 • Published: 2023-02-01
  • Supported by:
    The National Social Science Foundation of China (17BYY090); Zhejiang Province Key Research and Development Program (2017C03058); Zhejiang Province “Top Soldiers” and “Leading Geese” Project (2023C03202)

Abstract:

Triple extraction aims to obtain relations between entities from unstructured text and apply them to downstream tasks. The embedding mechanism has a great impact on the performance of a triple extraction model: the embedding vectors should carry rich semantic information closely related to the relation extraction task. In Chinese datasets, the information carried by individual words varies widely, and word segmentation errors can cause a loss of semantic information. To avoid this, a triple joint extraction method combining hybrid embedding and relational label embedding (HEPA) was designed. A hybrid embedding scheme that combines character embedding and word embedding was proposed to reduce the errors introduced by incorrect word segmentation. A relational embedding mechanism that fuses text and relation labels was added, and an attention mechanism was used to distinguish the relevance of entities in a sentence to different relation labels, thereby improving matching accuracy. Entities were matched with pointer annotation, which improves the extraction of relationally overlapping triples. Comparative experiments were conducted on the publicly available DuIE dataset, where the F1 score of HEPA improved by 2.8% over the best-performing baseline model (CasRel).
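The abstract's three components can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: all dimensions, the residual fusion of relation labels, and the per-relation start/end heads are assumptions used only to make the data flow concrete (hybrid embedding by concatenation, token-to-relation-label attention, and pointer annotation with sigmoid start/end probabilities per relation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper's actual sizes differ).
seq_len = 6                      # tokens in the sentence
d_char, d_word = 8, 8            # character- and word-level embedding sizes
n_relations = 3                  # number of relation labels
d_model = d_char + d_word        # hybrid embedding = char ++ word

# Hybrid embedding: character-level and word-level vectors concatenated,
# so segmentation errors in the word channel are softened by the char channel.
char_emb = rng.standard_normal((seq_len, d_char))
word_emb = rng.standard_normal((seq_len, d_word))
tokens = np.concatenate([char_emb, word_emb], axis=-1)   # (seq_len, d_model)

# One embedding vector per relation label.
rel_emb = rng.standard_normal((n_relations, d_model))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Attention between tokens and relation labels: each token attends over the
# relation embeddings, yielding relation-aware token features (residual
# fusion here is an assumption, not the paper's exact formula).
scores = tokens @ rel_emb.T / np.sqrt(d_model)   # (seq_len, n_relations)
attn = softmax(scores, axis=-1)
rel_aware = tokens + attn @ rel_emb              # (seq_len, d_model)

# Pointer annotation: per relation, independent start/end heads with sigmoid
# outputs; separate heads per relation let overlapping triples coexist.
W_start = rng.standard_normal((n_relations, d_model))
W_end = rng.standard_normal((n_relations, d_model))
start_prob = 1 / (1 + np.exp(-(rel_aware @ W_start.T)))  # (seq_len, n_relations)
end_prob = 1 / (1 + np.exp(-(rel_aware @ W_end.T)))      # (seq_len, n_relations)

print(start_prob.shape, end_prob.shape)
```

At inference, a token span would be read off wherever a start probability and a later end probability for the same relation both exceed a threshold; here the weights are random, so the sketch only demonstrates shapes and data flow.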

Key words: triple extraction, relational embedding, BERT, attention mechanism, pointer annotation
