Chinese Journal of Network and Information Security, 2022, Vol. 8, Issue (5): 1-25. doi: 10.11959/j.issn.2096-109x.2022063

• Survey •

Survey on explainable knowledge graph reasoning methods

Yi XIA, Mingjing LAN, Xiaohui CHEN, Junyong LUO, Gang ZHOU, Peng HE

  1. Information Engineering University, Zhengzhou 450001, China
  • Revised: 2022-06-13  Online: 2022-10-15  Published: 2022-10-01
  • About the authors:
    Yi XIA (1997- ), born in Dandong, Liaoning, is a master's student at Information Engineering University; his main research interest is knowledge graph reasoning.
    Mingjing LAN (1982- ), born in Yingshang, Anhui, is an associate professor at Information Engineering University; his main research interest is knowledge graphs.
    Xiaohui CHEN (1983- ), born in Urumqi, Xinjiang, is an associate professor and doctoral supervisor at Information Engineering University; her main research interests are visualization and visual analytics.
    Junyong LUO (1964- ), born in Nanchang, Jiangxi, is a professor and doctoral supervisor at Information Engineering University; his main research interest is network and information security.
    Gang ZHOU (1974- ), born in Wujin, Jiangsu, is a professor and doctoral supervisor at Information Engineering University; his main research interest is data mining.
    Peng HE (1983- ), born in Zhengzhou, Henan, is a doctoral student at Information Engineering University; her main research interest is knowledge graph representation learning.
  • Supported by: The National Natural Science Foundation of China (41801313); The Science and Technology Program of Henan Province (222102210081, 222300420590)


Abstract:

In recent years, deep learning models have achieved remarkable progress in the prediction and classification tasks of artificial intelligence systems. However, most current deep learning models are black boxes, which hinders human understanding of their reasoning processes. Meanwhile, as artificial intelligence achieves continuous breakthroughs in research and application, high-performance complex algorithms, models, and systems generally lack transparency and interpretability in decision making. This makes such technologies difficult to apply in fields with strict interpretability requirements, such as national defense, medical care, and cyber security, where an uninterpretable reasoning method compromises both the reasoning results and their traceability. Interpretability therefore needs to be integrated into these algorithms and systems, using explicit, explainable knowledge reasoning to support the related prediction tasks and to form a reliable behavior explanation mechanism. As one of the latest forms of knowledge representation, the knowledge graph models a semantic network, describing the entities and relations of the objective world in a structured form, and is widely used in knowledge reasoning. On the basis of discrete symbolic representation, knowledge graph reasoning explains its inference process through auxiliary means such as reasoning paths and logic rules, providing an important route toward explainable artificial intelligence. A comprehensive review of explainable knowledge graph reasoning was given. The concepts of explainable artificial intelligence and knowledge reasoning were introduced briefly. The latest research progress of explainable knowledge graph reasoning methods was then surveyed from the perspective of the three research paradigms of artificial intelligence, and the ideas and improvement processes of the algorithms in different scenarios were explained in detail. Finally, the research prospects and future directions of explainable knowledge graph reasoning were discussed.

Key words: knowledge reasoning, knowledge graph, explainable artificial intelligence, information security
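
To make the "reasoning path as explanation" idea from the abstract concrete, the following minimal Python sketch applies one hand-written Horn rule to a toy set of triples and returns each inferred fact together with the grounded path that justifies it. The entities, relations, and the rule itself are illustrative assumptions for this sketch, not examples taken from the surveyed methods.

# Minimal sketch: explainable knowledge graph reasoning with one Horn rule.
# The knowledge graph is a set of (head, relation, tail) triples; each
# inferred fact is returned together with the grounded reasoning path that
# supports it. All names below (Alice, born_in, ...) are hypothetical toy data.

kg = {
    ("Alice", "born_in", "Zhengzhou"),
    ("Zhengzhou", "located_in", "Henan"),
    ("Bob", "born_in", "Nanchang"),
    ("Nanchang", "located_in", "Jiangxi"),
}

def infer_with_explanations(triples):
    """Apply the rule born_in(x, y) AND located_in(y, z) => province_of(x, z)."""
    inferred = []
    for (x, r1, y) in triples:
        if r1 != "born_in":
            continue
        for (y2, r2, z) in triples:
            if r2 == "located_in" and y2 == y:
                fact = (x, "province_of", z)
                # The grounded two-hop path is the human-readable explanation.
                path = [(x, "born_in", y), (y, "located_in", z)]
                inferred.append((fact, path))
    return inferred

for fact, path in infer_with_explanations(kg):
    print("inferred:", fact)
    print("  because:", " -> ".join(map(str, path)))

Each derived triple is printed with the two-hop path that produced it, mirroring the path-and-rule style of explanation that the symbolic and neuro-symbolic methods discussed in the survey aim to provide.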

CLC number:
