通信学报 (Journal on Communications), 2020, Vol. 41, Issue 10: 70-79. doi: 10.11959/j.issn.1000-436x.2020181
Revised:
2020-08-05
Online:
2020-10-25
Published:
2020-11-05
About the authors:
Yanfeng LI (1988- ), born in Langfang, Hebei, Ph.D., is an associate professor and doctoral supervisor at Beijing Jiaotong University; her research interests include image processing and pattern recognition|Bin ZHANG (1995- ), born in Dezhou, Shandong, is a master's student at Beijing Jiaotong University; his research interests include image processing and pattern recognition|Jia SUN (1995- ), born in Dandong, Liaoning, is a Ph.D. student at Beijing Jiaotong University; her research interests include image processing and pattern recognition|Houjin CHEN (1965- ), born in Ma'anshan, Anhui, Ph.D., is a professor and doctoral supervisor at Beijing Jiaotong University; his research interests include image processing and pattern recognition|Jinlei ZHU (1983- ), born in Cao County, Shandong, is a Ph.D. student at Beijing Jiaotong University; his research interests include image processing and pattern recognition
Supported by:
Yanfeng LI,Bin ZHANG,Jia SUN,Houjin CHEN(),Jinlei ZHU
Abstract:
Existing cross-dataset person re-identification methods generally focus on reducing the data distribution discrepancy between two datasets, while ignoring the effect of background information on recognition performance. To address this problem, a cross-dataset person re-identification method based on a multi-pool fusion and background elimination network was proposed. A multi-pool fusion (MPF) network was constructed to take both global and local features into account and to represent features at multiple granularities. A feature-level supervised background elimination network was constructed to enable the network to extract useful pedestrian foreground features, trained with a multi-task loss function combining the pedestrian classification loss and a feature activation loss. The method was evaluated on three public person re-identification datasets. With MSMT17 as the training set, the cross-dataset mAP reached 35.53% on Market-1501, 9.24% higher than the ResNet50 network, and 41.45% on DukeMTMC-reID, 10.72% higher than ResNet50. Compared with existing methods, the proposed method achieves better cross-dataset person re-identification performance.
CLC number:
Yanfeng LI,Bin ZHANG,Jia SUN,Houjin CHEN,Jinlei ZHU. Cross-dataset person re-identification method based on multi-pool fusion and background elimination network[J]. Journal on Communications, 2020, 41(10): 70-79.
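The multi-pool fusion idea summarized in the abstract, fusing a global descriptor with pooled horizontal strips to obtain multi-granularity local features, can be sketched as follows. This is a minimal NumPy illustration; the strip count, pooling operators, and feature-map size are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def multi_pool_fusion(feat_map, num_strips=4):
    """Fuse a global descriptor with horizontal-strip local descriptors.

    feat_map: CNN feature map of shape (C, H, W), e.g. the output of a
    ResNet50 backbone. Returns one concatenated feature vector.
    """
    # Global branch: average pooling over the whole spatial extent.
    global_feat = feat_map.mean(axis=(1, 2))               # (C,)
    # Local branch: split the map into horizontal strips and pool each one,
    # giving part-level descriptors at a finer granularity.
    strips = np.array_split(feat_map, num_strips, axis=1)  # split along H
    local_feats = [s.max(axis=(1, 2)) for s in strips]     # num_strips x (C,)
    return np.concatenate([global_feat] + local_feats)     # ((1+num_strips)*C,)

fused = multi_pool_fusion(np.random.rand(2048, 24, 8))
print(fused.shape)  # (10240,)
```

Concatenating the global vector with the strip vectors is what lets a single embedding carry both whole-body appearance and part-level cues.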
Table 1
Ablation study of the MPF network

| Dataset | Network | Rank-1 | Rank-5 | Rank-10 | mAP |
| --- | --- | --- | --- | --- | --- |
| Market-1501 | ResNet50 | 82.40% | 93.15% | 95.50% | 60.76% |
| | G-L | 87.97% | 95.63% | 97.18% | 68.18% |
| | MPF | 91.90% | 97.01% | 98.25% | 76.35% |
| DukeMTMC-reID | ResNet50 | 74.66% | 85.92% | 89.79% | 54.23% |
| | G-L | 78.90% | 88.78% | 91.52% | 61.10% |
| | MPF | 83.94% | 92.11% | 94.21% | 68.29% |
| MSMT17 | ResNet50 | 60.85% | 74.85% | 80.68% | 30.67% |
| | G-L | 66.00% | 78.97% | 83.38% | 34.43% |
| | MPF | 72.07% | 83.07% | 86.50% | 40.70% |
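The background elimination network evaluated in the cross-dataset tables below is trained with a multi-task loss combining a pedestrian classification term and a feature activation term. A minimal sketch under assumptions: the activation loss here simply penalizes feature energy outside a given foreground mask, and the trade-off weight `lam` is hypothetical, not the paper's exact formulation.

```python
import numpy as np

def softmax_cross_entropy(logits, label):
    """Pedestrian identity classification loss for one sample."""
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def feature_activation_loss(feat_map, fg_mask):
    """Penalize activations falling outside the pedestrian foreground,
    encouraging the backbone to ignore background clutter.

    feat_map: (C, H, W) activations; fg_mask: (H, W) binary foreground mask.
    """
    bg_mask = 1.0 - fg_mask
    bg_energy = (np.abs(feat_map) * bg_mask).sum()
    return bg_energy / (bg_mask.sum() * feat_map.shape[0] + 1e-8)

def multi_task_loss(logits, label, feat_map, fg_mask, lam=0.1):
    # Weighted sum of the two objectives; lam balances identity discrimination
    # against background suppression.
    return softmax_cross_entropy(logits, label) + lam * feature_activation_loss(feat_map, fg_mask)
```

With a binary foreground mask from a human-parsing model (such as those of references [24] and [25]), minimizing the second term drives feature responses toward the pedestrian region rather than the background.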
Table 6
Comparison of cross-dataset results for Market-1501→DukeMTMC-reID

| Network | Rank-1 | Rank-5 | Rank-10 | mAP |
| --- | --- | --- | --- | --- |
| TJ-AIDL[15] | 44.3% | 59.6% | 65.0% | 23.0% |
| MMFA[16] | 45.3% | 59.8% | 66.3% | 24.7% |
| HHL[17] | 46.9% | 61.0% | 66.7% | 27.2% |
| PT-GAN[18] | 27.4% | — | 50.7% | — |
| SP-GAN[19] | 41.1% | 56.6% | 63.0% | 22.3% |
| ATNet[20] | 45.1% | 59.5% | 64.2% | 24.9% |
| MPF + background elimination | 55.57% | 68.81% | 74.06% | 30.73% |
Table 7
Comparison of cross-dataset results for DukeMTMC-reID→Market-1501

| Network | Rank-1 | Rank-5 | Rank-10 | mAP |
| --- | --- | --- | --- | --- |
| TJ-AIDL[15] | 58.2% | 74.8% | 81.1% | 26.5% |
| MMFA[16] | 56.7% | 75.0% | 81.8% | 27.4% |
| HHL[17] | 62.2% | 78.8% | 84.0% | 31.4% |
| PT-GAN[18] | 38.6% | — | 66.1% | — |
| SP-GAN[19] | 51.5% | 70.1% | 76.8% | 22.8% |
| ATNet[20] | 55.7% | 73.2% | 79.4% | 25.6% |
| MPF + background elimination | 62.48% | 78.12% | 84.27% | 30.72% |
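The Rank-k and mAP figures in tables 1, 6, and 7 follow the standard re-identification evaluation protocol: each query is ranked against the gallery by feature distance, Rank-k is the fraction of queries whose correct identity appears in the top k, and mAP averages precision over all correct matches. A minimal single-query illustration (ignoring the same-camera filtering used in the full protocol):

```python
import numpy as np

def eval_single_query(distances, gallery_ids, query_id, ks=(1, 5, 10)):
    """Rank-k hits and average precision for one query.

    distances: (N,) distances from the query to each gallery item.
    gallery_ids: (N,) identity labels of the gallery items.
    """
    order = np.argsort(distances)               # nearest gallery items first
    matches = (gallery_ids[order] == query_id)  # relevance flag per rank
    rank_hits = {k: bool(matches[:k].any()) for k in ks}
    # Average precision: mean of precision@rank over the true-match positions.
    hit_positions = np.where(matches)[0]
    precisions = [(i + 1) / (pos + 1) for i, pos in enumerate(hit_positions)]
    ap = float(np.mean(precisions)) if precisions else 0.0
    return rank_hits, ap
```

Averaging `ap` over all queries gives mAP; averaging the `rank_hits` flags gives the Rank-k columns reported above.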
[1] | CHEN K , CHEN Y , HAN C ,et al. Hard sample mining makes person re-identification more efficient and accurate[J]. Neurocomputing, 2020,382: 259-267. |
[2] | SERBETCI A , AKGUL Y S . End-to-end training of CNN ensembles for person re-identification[J]. Pattern Recognition, 2020,104:107319. |
[3] | LI Y J , ZHUO L , ZHANG J ,et al. A survey of person re-identification[J]. Acta Automatica Sinica, 2018,44(9): 1554-1568. (in Chinese) |
[4] | GOU M , ZHANG X , RATES-BORRAS A . Person re-identification in appearance impaired scenarios[C]// Proceedings of British Machine Vision Conference. Saarland:DBLP, 2016: 1-14. |
[5] | MATSUKAWA T , OKABE T , SUZUKI E . Hierarchical gaussian descriptor for person re-identification[C]// Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2016: 1363-1372. |
[6] | KOESTINGER M , HIRZER M , WOHLHART P . Large scale metric learning from equivalence constraints[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2012: 2288-2295. |
[7] | ZHENG W S , GONG S , XIANG T . Re-identification by relative distance comparison[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013,35(3): 653-668. |
[8] | YI D , LEI Z , LIAO S ,et al. Deep metric learning for person re-identification[C]// International Conference on Pattern Recognition. Stockholm:Institute of Electrical and Electronics Engineers Incorporated, 2014: 34-39. |
[9] | YAO H , ZHANG S , ZHANG Y ,et al. Deep representation learning with part loss for person re-identification[J]. IEEE Transactions on Image Processing, 2019,28(6): 2860-2871. |
[10] | SUN Y , ZHENG L , YANG Y ,et al. Beyond part models:person retrieval with refined part pooling (and a strong convolutional baseline)[C]// Proceedings of the European Conference on Computer Vision. Berlin:Springer, 2018: 501-508. |
[11] | FU Y , WEI Y , ZHOU Y ,et al. Horizontal pyramid matching for person re-identification[J]. arXiv preprint, arXiv:1804.05275, 2018. |
[12] | QI L , WANG L , HUO J ,et al. A novel unsupervised camera-aware domain adaptation framework for person re-identification[C]// Proceedings of the IEEE International Conference on Computer Vision. Piscataway:IEEE Press, 2019: 8079-8088. |
[13] | LI Y , LIN C , LIN Y ,et al. Cross-dataset person re-identification via unsupervised pose disentanglement and adaptation[C]// Proceedings of the IEEE International Conference on Computer Vision. Piscataway:IEEE Press, 2019: 7918-7928. |
[14] | HUANG H , YANG W , CHEN X ,et al. EANet:enhancing alignment for cross-domain person re-identification[J]. arXiv preprint, arXiv:1812.11369, 2018. |
[15] | WANG J , ZHU X , GONG S ,et al. Transferable joint attribute-identity deep learning for unsupervised person re-identification[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2018: 2275-2284. |
[16] | LIN S , LI H , LI C ,et al. Multi-task mid-level feature alignment network for unsupervised cross-dataset person re-identification[J]. arXiv preprint, arXiv:1807.01440, 2018. |
[17] | ZHONG Z , ZHENG L , LI S ,et al. Generalizing a person retrieval model hetero-and homogeneously[C]// Proceedings of the European Conference on Computer Vision. Berlin:Springer, 2018: 176-192. |
[18] | WEI L , ZHANG S , GAO W ,et al. Person transfer GAN to bridge domain gap for person re-identification[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2018: 79-88. |
[19] | DENG W , ZHENG L , YE Q ,et al. Image-image domain adaptation with preserved self-similarity and domain-dissimilarity for person re-identification[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2018: 994-1003. |
[20] | LIU J , ZHA Z , CHEN D ,et al. Adaptive transfer network for cross-domain person re-identification[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2019: 7202-7211. |
[21] | TIAN M , YI S , LI H ,et al. Eliminating background-bias for robust person re-identification[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2018: 5794-5803. |
[22] | HE K , ZHANG X , REN S . Deep residual learning for image recognition[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2016: 770-778. |
[23] | LONG J , SHELHAMER E , DARRELL T ,et al. Fully convolutional networks for semantic segmentation[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2015: 3431-3440. |
[24] | LUO Y , ZHENG Z , ZHENG L ,et al. Macro-micro adversarial network for human parsing[C]// Proceedings of the European Conference on Computer Vision. Berlin:Springer, 2018: 424-440. |
[25] | GONG K , LIANG X , ZHANG D ,et al. Look into person:self-supervised structure-sensitive learning and a new benchmark for human parsing[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Piscataway:IEEE Press, 2017: 6757-6765. |
[26] | ZHENG L , SHEN L , TIAN L ,et al. Scalable person re-identification:a benchmark[C]// Proceedings of the IEEE International Conference on Computer Vision. Piscataway:IEEE Press, 2015: 1116-1124. |
[27] | ZHENG Z , ZHENG L , YANG Y . Unlabeled samples generated by GAN improve the person re-identification baseline in vitro[C]// Proceedings of the IEEE International Conference on Computer Vision. Piscataway:IEEE Press, 2017: 3774-3782. |