Journal on Communications, 2021, Vol. 42, Issue (9): 65-74. doi: 10.11959/j.issn.1000-436x.2021167

• Academic Paper •

Label flipping adversarial attack on graph neural network

Yiteng WU, Wei LIU, Hongtao YU

  1. Information Engineering University, Zhengzhou 450002, China
  • Revised: 2021-08-01  Online: 2021-09-25  Published: 2021-09-01
  • About the authors:
    Yiteng WU (1992- ), male, born in Jilin City, Jilin Province, is a Ph.D. candidate at Information Engineering University; his main research interests are artificial intelligence security and adversarial machine learning.
    Wei LIU (1992- ), male, born in Baoding, Hebei Province, is a master's student at Information Engineering University; his main research interests are artificial intelligence security and natural language processing.
    Hongtao YU (1970- ), male, born in Dandong, Liaoning Province, holds a Ph.D. and is a research fellow and doctoral supervisor at Information Engineering University; his main research interests are big data and artificial intelligence.
  • Supported by:
    Foundation for Innovative Research Groups of the National Natural Science Foundation of China (61521003); National Key Research and Development Program of China (2016QY03D0502); Zhengzhou City Collaborative Innovation Major Project (162/32410218)

Abstract:

To expand the types of adversarial attacks on graph neural networks and fill the corresponding research gap, a label flipping adversarial attack method was proposed to evaluate the robustness of graph neural networks to label noise. The effectiveness mechanisms of adversarial attacks were summarized as three basic hypotheses: the contradictory data hypothesis, the parameter discrepancy hypothesis, and the identically distributed hypothesis. Based on these three hypotheses, label flipping attack models were established. Using gradient-based attack methods, it was theoretically proved that the attack gradients obtained under the parameter discrepancy hypothesis are identical to those obtained under the identically distributed hypothesis, which establishes the equivalence of the two attack methods. The advantages and disadvantages of the models built on the different hypotheses were compared and analyzed experimentally, and extensive experimental results verify the effectiveness of the proposed attack models.
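
The abstract only outlines the approach, so the sketch below is a minimal, illustrative example (not the authors' implementation) of how a gradient-oriented label flipping attack on a graph neural network can be set up: the one-hot training labels are relaxed to continuous variables, a linearized two-layer GCN surrogate is trained, and candidate flips are ranked by the gradient of the training loss with respect to the label matrix. The surrogate architecture, the loss, and all names such as flip_scores are assumptions made for illustration only.

# Hypothetical sketch of a gradient-based label flipping attack on a GCN-style
# surrogate; it only illustrates the general idea of "attack gradients with
# respect to training labels" mentioned in the abstract.
import torch
import torch.nn.functional as F

def flip_scores(A_hat, X, Y_train, train_mask, epochs=50, lr=0.1):
    """Return d(loss)/d(Y); large-magnitude entries suggest influential label flips.

    A_hat      : (n, n) float tensor, normalized adjacency with self-loops
    X          : (n, d) float tensor, node features
    Y_train    : (n, c) float tensor, one-hot training labels
    train_mask : (n,) bool tensor, True for labeled nodes
    """
    d = X.shape[1]
    c = Y_train.shape[1]

    # Relax the discrete labels to continuous variables so that the training
    # loss is differentiable with respect to them.
    Y = Y_train.clone().requires_grad_(True)

    # Linearized two-layer GCN surrogate: logits = A_hat @ A_hat @ X @ W.
    W = (0.01 * torch.randn(d, c)).requires_grad_(True)
    opt = torch.optim.Adam([W], lr=lr)
    for _ in range(epochs):
        logits = A_hat @ A_hat @ X @ W
        loss = F.cross_entropy(logits[train_mask], Y[train_mask].argmax(dim=1))
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Attack gradient: derivative of the (soft-label) training loss with
    # respect to the label matrix at the trained surrogate parameters.
    logits = A_hat @ A_hat @ X @ W
    log_probs = F.log_softmax(logits[train_mask], dim=1)
    soft_loss = -(Y[train_mask] * log_probs).sum(dim=1).mean()
    return torch.autograd.grad(soft_loss, Y)[0]

Under a flip budget, an attacker would flip the labels of the training nodes whose gradient entries have the largest magnitude (moving each label toward the class the gradient favors) and let the victim retrain; the paper itself derives such gradients under the parameter discrepancy and identically distributed hypotheses and proves that they coincide.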

Key words: graph neural network, adversarial attack, label flipping, attack hypothesis, robustness

CLC number:
