Telecommunications Science ›› 2022, Vol. 38 ›› Issue (1): 83-94. doi: 10.11959/j.issn.1000-0801.2022004

• Research and Development •

A flexible pruning method for deep convolutional neural networks

Liang CHEN1,2, Yaguan QIAN1,2, Zhiqiang HE1,2, Xiaohui GUAN3, Bin WANG4, Xing WANG4   

    1 School of Science/School of Big-data Science, Zhejiang University of Science and Technology, Hangzhou 310023, China
    2 Hikvision-Zhejiang University of Science and Technology Edge Intelligence Security Lab, Hangzhou 310023, China
    3 College of Information Engineering & Art Design, Zhejiang University of Water Resources and Electric Power, Hangzhou 310023, China
    4 College of Electrical Engineering, Zhejiang University, Hangzhou 310063, China
  • Revised: 2021-12-13 • Online: 2022-01-20 • Published: 2022-01-01
  • Supported by:
    The National Key Research and Development Program of China (2018YFB2100400); The National Natural Science Foundation of China (61902082)

Abstract:

Despite the successful application of deep convolutional neural networks, their structural redundancy, large memory footprint, and high computational cost make them hard to deploy on edge devices with limited resources. Network pruning is an effective way to eliminate this redundancy. An efficient flexible pruning strategy was proposed to find the best architecture under a given resource budget. The contribution of each channel was calculated from the distribution of the channel scaling factors, and the pruning result was estimated and simulated in advance to improve efficiency. Experimental results with VGG16 and ResNet56 on CIFAR-10 show that flexible pruning reduces FLOPs by 71.3% and 54.3%, respectively, while accuracy drops by only 0.15 and 0.20 percentage points compared with the benchmark models.
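The abstract describes the mechanism only at a high level: channel contributions are derived from the distribution of channel scaling factors, and the pruning outcome is estimated before any weights are removed. Below is a minimal PyTorch sketch of that idea. It assumes, as in typical slimming-style channel pruning (the abstract does not confirm this), that the scaling factors are the BatchNorm gamma weights; the normalization and all function names are illustrative, not the authors' exact formulation.

    # Sketch: rank channels by BatchNorm scaling factors (assumed here to be
    # the "channel scaling factors" of the abstract) and mask the weakest
    # channels under a single global pruning budget.
    import torch
    import torch.nn as nn

    def channel_contributions(model: nn.Module) -> dict:
        """Per-BN-layer contribution scores derived from |gamma|."""
        scores = {}
        for name, module in model.named_modules():
            if isinstance(module, nn.BatchNorm2d):
                gamma = module.weight.detach().abs()
                # Normalize by the layer's own gamma distribution so that
                # layers with different scales compete fairly for the budget.
                scores[name] = gamma / (gamma.sum() + 1e-12)
        return scores

    def select_prune_masks(scores: dict, prune_ratio: float) -> dict:
        """Keep-masks that drop roughly `prune_ratio` of all channels."""
        all_scores = torch.cat(list(scores.values()))
        k = max(int(len(all_scores) * prune_ratio), 1)
        threshold = torch.kthvalue(all_scores, k).values
        # True = channel kept; counting True entries per layer yields the
        # pruned architecture before any weights are actually removed.
        return {name: s > threshold for name, s in scores.items()}

Counting the kept channels per layer gives the pruned architecture up front, which loosely mirrors the abstract's notion of estimating and simulating the pruning result in advance, before rebuilding the narrower network and fine-tuning it.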

Key words: convolutional neural network, network pruning, scaling factor, channel contribution
