Journal on Communications ›› 2022, Vol. 43 ›› Issue (2): 143-155. doi: 10.11959/j.issn.1000-436x.2022031

• Academic Paper •

Efficient cross-component prediction for H.266/VVC based on lightweight fully connected networks

Junyan HUO1, Danni WANG1, Yanzhuo MA1, Shuai WAN2, Fuzheng YANG1

  1. State Key Laboratory of Integrated Services Network (ISN), Xidian University, Xi’an, Shaanxi 710071, China
    2. School of Electronics and Information, Northwestern Polytechnical University, Xi’an, Shaanxi 710072, China
  • Revised: 2022-01-24  Online: 2022-02-25  Published: 2022-02-01
  • About the authors: Junyan HUO (1982- ), female, born in Jinzhong, Shanxi; Ph.D.; associate professor at Xidian University. Her main research interests include multimedia communication, video coding, and intelligent information processing.
    Danni WANG (1996- ), female, born in Xi’an, Shaanxi; M.S. candidate at Xidian University. Her main research interest is video compression coding.
    Yanzhuo MA (1980- ), female, born in Shenzhou, Hebei; Ph.D.; associate professor at Xidian University. Her main research interests include video coding and video transmission.
    Shuai WAN (1979- ), female, born in Luoyang, Henan; Ph.D.; professor and doctoral supervisor at Northwestern Polytechnical University. Her main research interests include video coding, point cloud compression, and multimedia communication.
    Fuzheng YANG (1977- ), male, born in Dezhou, Shandong; Ph.D.; professor and doctoral supervisor at Xidian University. His main research interests include next-generation video compression standards, deep-learning-based video processing, and virtual reality.
  • Supported by:
    The National Natural Science Foundation of China (62101409); The National Natural Science Foundation of China (62171353)

Efficient cross-component prediction for H.266/VVC based on lightweight fully connected networks

Junyan HUO1, Danni WANG1, Yanzhuo MA1, Shuai WAN2, Fuzheng YANG1   

  1. State Key Laboratory of Integrated Services Network, Xidian University, Xi’an 710071, China
    2. School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China
  • Revised: 2022-01-24  Online: 2022-02-25  Published: 2022-02-01
  • Supported by:
    The National Natural Science Foundation of China (62101409); The National Natural Science Foundation of China (62171353)

Abstract (in Chinese):

The new-generation video coding standard H.266/VVC introduces cross-component linear model (CCLM) prediction to improve compression efficiency. To address the problem that the luma and chroma components are correlated yet this correlation is difficult to model, a neural network based cross-component prediction algorithm is proposed. The algorithm selects the reference pixels with strong correlation, according to the luma difference between the pixel to be predicted and the reference pixels, to form a reference subset, and then feeds this subset into a lightweight fully connected network to obtain the chroma prediction. Experimental results show that, compared with the H.266/VVC test model version 10.0 (VTM10.0), the proposed algorithm improves chroma prediction accuracy and achieves bitrate savings of 0.27%, 1.54%, and 1.84% on Y, Cb, and Cr, respectively. The proposed algorithm has the advantage that a unified network structure can be used for different block sizes and coding parameters.

Key words: H.266/VVC, chroma intra prediction, cross-component prediction, neural network

Abstract:

Cross-component linear model (CCLM) prediction in H.266/versatile video coding (VVC) can improve compression efficiency. There is a high correlation between the luma and chroma components, but this correlation is difficult to model explicitly. An algorithm for neural network based cross-component prediction (NNCCP) was proposed, in which reference pixels with high correlation were selected according to the luma difference between the reference pixels and the pixel to be predicted. Based on the highly correlated reference pixels and their luma differences, the chroma prediction was obtained with lightweight fully connected networks. Experimental results demonstrate that the proposed algorithm achieves 0.27%, 1.54%, and 1.84% bitrate savings for the Y, Cb, and Cr components, respectively, compared with the VVC test model 10.0 (VTM10.0). Besides, a unified network can be applied to blocks with different sizes and different quantization parameters.
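As a rough illustration of the prediction described above (not the paper's implementation), the following PyTorch sketch selects, for each chroma sample, the K neighbouring reference pixels whose reconstructed luma is closest to the collocated luma of the current pixel, and feeds the selected luma differences and reference chroma values into a small fully connected network. The class name LightweightCCP, the subset size K, the layer widths, and the input layout are illustrative assumptions, not details taken from the paper.

# Hedged sketch of the NNCCP idea: reference-subset selection by luma
# difference followed by a lightweight fully connected network.
import torch
import torch.nn as nn


class LightweightCCP(nn.Module):
    """Small MLP predicting one chroma sample from K selected references (illustrative)."""

    def __init__(self, k: int = 4, hidden: int = 16):
        super().__init__()
        self.k = k
        # Input features: K luma differences and K reference chroma samples -> 2K values.
        self.mlp = nn.Sequential(
            nn.Linear(2 * k, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, ref_luma, ref_chroma, cur_luma):
        # ref_luma, ref_chroma: (B, N) neighbouring reconstructed reference samples
        # cur_luma: (B,) collocated (downsampled) luma of the pixels to be predicted
        diff = (ref_luma - cur_luma.unsqueeze(1)).abs()        # (B, N) luma differences
        # Keep the K references with the smallest luma difference (the reference subset).
        idx = diff.topk(self.k, dim=1, largest=False).indices  # (B, K)
        sel_diff = torch.gather(diff, 1, idx)
        sel_chroma = torch.gather(ref_chroma, 1, idx)
        feat = torch.cat([sel_diff, sel_chroma], dim=1)        # (B, 2K)
        return self.mlp(feat).squeeze(1)                       # predicted chroma, (B,)


if __name__ == "__main__":
    model = LightweightCCP()
    B, N = 8, 16                        # a batch of pixels, N neighbouring references each
    pred = model(torch.rand(B, N), torch.rand(B, N), torch.rand(B))
    print(pred.shape)                   # torch.Size([8])

Because the network takes a fixed-size reference subset rather than the whole block, the same structure can in principle serve different block sizes, which is consistent with the unified-network property stated in the abstract.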

Key words: H.266/VVC, chroma intra prediction, cross component prediction, neural network

