Journal on Communications ›› 2020, Vol. 41 ›› Issue (3): 33-44. doi: 10.11959/j.issn.1000-436x.2020041
Hongjun LI1,2,3,4, Chaobo LI1, Shibing ZHANG1
Revised: 2020-01-07
Online: 2020-03-25
Published: 2020-03-31
About the authors: Hongjun LI (1981- ), born in Nantong, Jiangsu, Ph.D., is an associate professor and master's supervisor at Nantong University; his research interests include image processing, pattern recognition, and artificial intelligence. | Chaobo LI (1995- ), born in Datong, Shanxi, is a master's student at Nantong University; her research interests include computer vision and deep learning. | Shibing ZHANG (1962- ), born in Nantong, Jiangsu, Ph.D., is a professor and doctoral supervisor at Nantong University; his research interests include wireless communication, intelligent signal processing, machine learning, and cognitive radio.
Abstract:
To address the problem that the quality of samples generated by generative adversarial networks differs markedly under noise drawn from different distributions, a noise-robust chi-square generative adversarial network was proposed. The proposed network combines the advantages of the chi-square divergence in quantification sensitivity and sparsity invariance, and introduces the chi-square divergence to measure the distance between the distributions of generated and real samples, which reduces the influence of different noise on the generated samples and lowers the quality requirement on real samples. A network architecture was built and a global optimization objective function was constructed to drive continuous optimization of the network and strengthen the effectiveness of the adversarial game. Experimental results show that, under different noise, the quality and robustness of the samples generated by the proposed network are superior to those of several current mainstream networks, with smaller differences in image quality. The introduction of the chi-square divergence not only improves the quality of the generated samples but also enhances the robustness of the network under different noise.
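As an illustration of how such a divergence enters GAN training, the following minimal PyTorch sketch uses the f-GAN-style variational bound with the Pearson chi-square generator function, whose convex conjugate is f*(t) = t²/4 + t. It is not the paper's exact CSGAN objective or architecture; the generator G, discriminator D, optimizers, and noise prior are placeholders.

```python
import torch

# Illustrative sketch only (not the paper's exact CSGAN objective): a GAN
# training step whose losses follow the f-GAN variational bound on the
# Pearson chi-square divergence, with conjugate f*(t) = t^2/4 + t.
# G and D are assumed to be torch.nn.Module instances; D outputs a raw
# (un-squashed) scalar per sample.

def chi2_critic_loss(d_real, d_fake):
    # critic maximizes E_real[D(x)] - E_fake[D(G(z))^2 / 4 + D(G(z))]
    return -(d_real.mean() - (d_fake ** 2 / 4 + d_fake).mean())

def chi2_generator_loss(d_fake):
    # generator lowers the estimated divergence by raising E_fake[f*(D(G(z)))]
    return -(d_fake ** 2 / 4 + d_fake).mean()

def train_step(G, D, opt_g, opt_d, real, noise_dim=100):
    z = torch.randn(real.size(0), noise_dim, device=real.device)  # noise prior, here N(0,1)
    fake = G(z)

    # discriminator (critic) update
    opt_d.zero_grad()
    loss_d = chi2_critic_loss(D(real), D(fake.detach()))
    loss_d.backward()
    opt_d.step()

    # generator update
    opt_g.zero_grad()
    loss_g = chi2_generator_loss(D(fake))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

Only the two loss functions depend on the chosen divergence; the alternating discriminator/generator updates follow the standard GAN training scheme.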
Hongjun LI, Chaobo LI, Shibing ZHANG. Noise robust chi-square generative adversarial network[J]. Journal on Communications, 2020, 41(3): 33-44.
Table 1  IS values of samples generated by each network under different noise distributions on CIFAR-10

| Noise distribution | GAN | | LSGAN | | WGAN | | CSGAN | |
| | Mean | Max | Mean | Max | Mean | Max | Mean | Max |
| Standard Gaussian N(0,1) | 5.04 | 5.21 | 5.17 | 5.47 | 5.37 | 5.51 | 5.52 | 5.77 |
| Normal N(0,0.01) | 4.47 | 4.83 | 5.10 | 5.40 | 5.27 | 5.53 | 5.44 | 5.72 |
| Truncated Gaussian | 4.87 | 5.07 | 5.11 | 5.30 | 5.11 | 5.42 | 5.49 | 5.74 |
| Uniform U(0,1) | 5.14 | 5.39 | 5.12 | 5.41 | 5.47 | 5.77 | 5.43 | 5.72 |
| Poisson P(1) | 4.42 | 4.75 | 5.00 | 5.27 | 5.36 | 5.51 | 5.53 | 5.84 |
| Gamma Ga(0,1) | 3.69 | 4.02 | 4.91 | 5.16 | 5.22 | 5.50 | 5.37 | 5.59 |
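The IS values reported in Table 1 and Table 2 are Inception Scores. The sketch below shows how the score is commonly computed from the class posteriors p(y|x) of a pretrained Inception classifier evaluated on the generated images; running the classifier itself and the usual averaging over several splits are omitted, and `probs` is a placeholder array.

```python
import numpy as np

# Minimal sketch of the Inception Score: IS = exp(E_x[KL(p(y|x) || p(y))]).
# `probs` is assumed to have shape (num_samples, num_classes) and to hold the
# classifier's softmax outputs p(y|x) for the generated images.

def inception_score(probs, eps=1e-12):
    p_y = probs.mean(axis=0, keepdims=True)                                  # marginal class distribution p(y)
    kl = (probs * (np.log(probs + eps) - np.log(p_y + eps))).sum(axis=1)     # per-sample KL(p(y|x) || p(y))
    return float(np.exp(kl.mean()))
```

For example, `inception_score(np.full((10, 10), 0.1))` returns 1.0, the minimum possible score, while confident predictions spread evenly over the classes approach the number of classes.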
Table 2  IS values of generated images versus the number of iterations under different noise distributions

| Iterations /10³ | Standard Gaussian N(0,1) | | | | Normal N(0,0.01) | | | | Truncated Gaussian | | | | Uniform U(0,1) | | | | Poisson P(1) | | | | Gamma Ga(0,1) | | | |
| | GAN | LSGAN | WGAN | CSGAN | GAN | LSGAN | WGAN | CSGAN | GAN | LSGAN | WGAN | CSGAN | GAN | LSGAN | WGAN | CSGAN | GAN | LSGAN | WGAN | CSGAN | GAN | LSGAN | WGAN | CSGAN |
| 0~10 | 2.24 | 2.32 | 2.24 | 2.13 | 2.23 | 2.31 | 2.32 | 2.37 | 2.49 | 2.18 | 2.44 | 2.21 | 2.13 | 2.24 | 2.21 | 2.20 | 2.30 | 2.32 | 2.26 | 2.23 | 2.53 | 2.32 | 2.23 | 2.38 |
| 11~20 | 2.99 | 2.68 | 2.68 | 2.90 | 2.98 | 2.73 | 2.77 | 2.64 | 3.11 | 2.68 | 2.71 | 2.60 | 2.63 | 2.80 | 2.52 | 2.59 | 2.89 | 2.84 | 2.66 | 2.79 | 3.26 | 2.78 | 2.50 | 2.72 |
| 21~30 | 3.30 | 2.91 | 2.79 | 2.99 | 3.29 | 3.06 | 3.04 | 2.84 | 3.20 | 2.94 | 2.89 | 3.00 | 3.16 | 2.97 | 2.68 | 2.87 | 3.01 | 3.04 | 2.77 | 2.88 | 3.07 | 2.94 | 2.54 | 2.84 |
| 31~40 | 3.38 | 3.20 | 2.97 | 3.24 | 3.69 | 3.46 | 3.40 | 3.20 | 4.20 | 3.26 | 3.21 | 3.33 | 3.84 | 3.10 | 2.83 | 3.20 | 3.24 | 3.47 | 3.03 | 3.25 | 3.39 | 3.27 | 2.69 | 2.84 |
| 41~50 | 4.00 | 3.66 | 3.28 | 3.76 | 3.74 | 3.73 | 3.75 | 3.64 | 3.46 | 3.44 | 3.68 | 3.37 | 3.73 | 3.39 | 3.22 | 3.42 | 3.83 | 3.69 | 3.41 | 3.35 | 3.51 | 3.59 | 2.90 | 3.29 |
| 51~60 | 3.38 | 4.05 | 3.60 | 4.02 | 3.18 | 4.03 | 4.16 | 3.94 | 3.39 | 3.74 | 3.77 | 3.58 | 3.58 | 3.61 | 3.50 | 3.72 | 4.01 | 3.92 | 3.74 | 3.66 | 3.61 | 3.97 | 3.13 | 3.41 |
| 61~70 | 3.50 | 4.21 | 3.59 | 4.07 | 4.01 | 4.29 | 4.26 | 4.14 | 3.66 | 3.93 | 3.81 | 3.71 | 3.69 | 3.70 | 3.67 | 4.03 | 4.24 | 4.21 | 3.95 | 4.05 | 3.94 | 4.09 | 3.51 | 3.70 |
| 71~80 | 3.68 | 4.49 | 3.73 | 4.21 | 4.33 | 4.45 | 4.35 | 4.43 | 3.93 | 4.02 | 3.86 | 4.11 | 4.24 | 3.96 | 3.94 | 4.36 | 4.27 | 4.36 | 4.30 | 4.32 | — | 4.23 | 3.89 | 3.96 |
| 81~90 | 4.01 | 4.56 | 4.13 | 4.45 | 3.99 | 4.60 | 4.50 | 4.78 | 4.05 | 4.16 | 4.03 | 4.41 | 4.45 | 4.18 | 4.28 | 4.64 | 4.75 | 4.40 | 4.53 | 4.78 | — | 4.41 | 4.30 | 4.37 |
| 91~100 | 4.32 | 4.71 | 4.42 | 4.61 | 4.18 | 4.66 | 4.58 | 4.94 | 4.72 | 4.43 | 4.17 | 4.78 | 4.44 | 4.39 | 4.60 | 4.84 | — | 4.55 | 4.81 | 5.04 | — | 4.49 | 4.67 | 4.79 |
| 101~110 | 4.49 | 4.83 | 4.78 | 4.81 | 4.47 | 4.73 | 4.74 | 5.12 | 4.42 | 4.53 | 4.33 | 5.00 | 3.99 | 4.63 | 4.90 | 5.01 | — | 4.56 | 4.94 | 5.17 | — | 4.60 | 4.86 | 4.98 |
| 111~120 | 4.70 | 4.91 | 4.92 | 5.01 | 4.77 | 4.79 | 4.83 | 5.31 | 4.58 | 4.64 | 4.48 | 5.08 | 4.74 | 4.73 | 5.07 | 5.19 | — | 4.61 | 5.05 | 5.21 | — | 4.69 | 4.96 | 5.10 |
| 121~130 | 4.73 | 5.03 | 5.00 | 5.14 | — | 4.85 | 4.97 | 5.34 | 4.80 | 4.76 | 4.63 | 5.20 | 4.67 | 4.83 | 5.24 | 5.28 | — | 4.76 | 5.03 | 5.31 | — | 4.79 | 4.93 | 5.18 |
| 131~140 | 4.84 | 5.02 | 5.07 | 5.29 | — | 4.95 | 5.02 | 5.44 | 4.97 | 4.86 | 4.76 | 5.29 | 4.82 | 4.99 | 5.30 | 5.33 | — | 4.75 | 5.05 | 5.27 | — | 4.82 | 5.03 | 5.23 |
| 141~150 | 5.07 | 5.08 | 5.16 | 5.33 | — | 5.00 | 5.12 | 5.44 | 4.84 | 4.99 | 4.89 | 5.35 | 4.68 | 4.94 | 5.35 | 5.36 | — | 4.85 | 5.14 | 5.37 | — | 4.80 | 5.10 | 5.25 |
| 151~160 | 5.07 | 5.12 | 5.26 | 5.43 | — | 5.02 | 5.15 | 5.39 | — | 5.02 | 5.01 | 5.37 | 5.10 | 5.01 | 5.32 | 5.45 | — | 4.85 | 5.21 | 5.47 | — | 4.90 | 5.05 | 5.26 |
| 161~170 | 4.90 | 5.16 | 5.33 | 5.43 | — | 5.08 | 5.17 | 5.43 | — | 5.10 | 5.01 | 5.38 | 5.16 | 5.06 | 5.39 | 5.42 | — | 4.97 | 5.26 | 5.42 | — | 4.95 | 5.10 | 5.31 |
| 171~180 | 4.92 | 5.14 | 5.36 | 5.44 | — | 5.17 | 5.19 | 5.36 | — | 5.10 | 5.06 | 5.47 | 5.01 | 5.07 | 5.46 | 5.44 | — | 4.94 | 5.33 | 5.48 | — | 4.86 | 5.17 | 5.32 |
| 181~190 | 5.08 | 5.18 | 5.40 | 5.52 | — | 5.04 | 5.30 | 5.49 | — | 5.10 | 5.10 | 5.45 | 5.18 | 5.12 | 5.49 | 5.40 | — | 5.02 | 5.37 | 5.54 | — | 4.91 | 5.24 | 5.39 |
| 191~200 | 5.11 | 5.19 | 5.35 | 5.59 | — | 5.08 | 5.32 | 5.48 | — | 5.15 | 5.18 | 5.54 | 5.24 | 5.16 | 5.45 | 5.46 | — | 5.04 | 5.38 | 5.57 | — | 4.98 | 5.24 | 5.39 |
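The noise distributions compared in the two tables are the priors fed to the generator. The following NumPy/SciPy sketch shows one way to draw them; the truncation bounds, the reading of N(0,0.01) as a variance, and the gamma shape/scale are assumptions, since the tables give only the distribution labels.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)
n, dim = 64, 100  # batch size and noise dimension (placeholders)

priors = {
    # standard Gaussian N(0,1)
    "standard_gaussian": rng.standard_normal((n, dim)),
    # N(0,0.01), read here as variance 0.01, i.e. standard deviation 0.1
    "normal_0_0p01": rng.normal(0.0, 0.1, (n, dim)),
    # truncated Gaussian; the bounds [-1, 1] are assumed, not given in the tables
    "truncated_gaussian": truncnorm.rvs(-1.0, 1.0, size=(n, dim), random_state=0),
    # uniform U(0,1)
    "uniform": rng.uniform(0.0, 1.0, (n, dim)),
    # Poisson P(1)
    "poisson": rng.poisson(1.0, (n, dim)).astype(np.float64),
    # gamma prior; the table's label Ga(0,1) is kept only as a label, and the
    # shape/scale used here (1, 1) are placeholders
    "gamma": rng.gamma(1.0, 1.0, (n, dim)),
}
```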