Journal on Communications ›› 2021, Vol. 42 ›› Issue (9): 65-74.doi: 10.11959/j.issn.1000-436x.2021167


Label flipping adversarial attack on graph neural network

Yiteng WU, Wei LIU, Hongtao YU   

  1. Information Engineering University, Zhengzhou 450002, China
  • Revised: 2021-08-01 Online: 2021-09-25 Published: 2021-09-01
  • Supported by:
    Foundation for Innovative Research Groups of The National Natural Science Foundation of China(61521003);The National Key Research and Development Program of China(2016QY03D0502);Zhengzhou City Collaborative Innovation Major Project(162/32410218)

Abstract:

To expand the range of adversarial attack types against graph neural networks and fill a gap in the existing research, label flipping attack methods were proposed to evaluate the robustness of graph neural networks under label noise. The mechanisms by which adversarial attacks take effect were summarized as three basic hypotheses: the contradictory data hypothesis, the parameter discrepancy hypothesis, and the identically distributed hypothesis. Label flipping attack models were then established on the basis of these three hypotheses. Using gradient-oriented attack methods, it was proved theoretically that the attack gradients derived from the parameter discrepancy hypothesis coincide with those derived from the identically distributed hypothesis, establishing the equivalence of the two attack methods. The advantages and disadvantages of the models built on the different hypotheses were compared and analyzed through experiments. Extensive experimental results verify the effectiveness of the proposed attack models.
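The gradient-oriented label flipping idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's method: it substitutes a plain logistic-regression surrogate for a graph neural network, relaxes the binary labels to continuous values, scores each training label by the gradient of the training loss with respect to that label, and greedily flips the labels whose flip direction most increases the loss. All function names and the scoring heuristic here are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_surrogate(X, y, lr=0.1, epochs=200):
    # Hypothetical surrogate: logistic regression fit by gradient descent
    # (stands in for the victim GNN in this sketch).
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def label_flip_attack(X, y, budget):
    """Greedy gradient-oriented label flipping (illustrative sketch).

    Relax each binary label y_i to a continuous variable, take the
    gradient of the per-example cross-entropy loss with respect to it,
    and flip the `budget` labels whose flip is predicted to increase
    the training loss the most.
    """
    w = train_surrogate(X, y)
    p = sigmoid(X @ w)
    # d(loss_i)/d(y_i) for cross-entropy -(y log p + (1-y) log(1-p))
    # is log((1 - p_i) / p_i).
    g = np.log(1.0 - p + 1e-12) - np.log(p + 1e-12)
    # Flipping y_i to 1 - y_i changes it by (1 - 2*y_i); a positive
    # score means the flip is predicted to raise the training loss.
    score = g * (1.0 - 2.0 * y)
    flip_idx = np.argsort(-score)[:budget]
    y_atk = y.copy()
    y_atk[flip_idx] = 1.0 - y_atk[flip_idx]
    return y_atk, flip_idx
```

Under the paper's parameter discrepancy or identically distributed hypotheses, the scoring step would instead measure the attack gradient with respect to the model parameters or the distribution shift; the greedy budget-constrained flipping loop stays the same.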

Key words: graph neural network, adversarial attack, label flipping, attack hypothesis, robustness
