[1] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]// 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 26-July 1, 2016, Las Vegas, USA. Piscataway: IEEE Press, 2016: 770-778.
[2] ABADI M, BARHAM P, CHEN J, et al. TensorFlow: a system for large-scale machine learning[C]// The 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’16), November 2-4, 2016, Savannah, USA. Berkeley: USENIX Association, 2016: 265-284.
[3] JOUPPI N P, YOUNG C, PATIL N, et al. In-datacenter performance analysis of a tensor processing unit[C]// The 44th Annual International Symposium on Computer Architecture, June 24-28, 2017, Toronto, Canada. New York: ACM Press, 2017: 1-12.
[4] ZHANG M, ZHOU Z. ML-KNN: a lazy learning approach to multi-label learning[J]. Pattern Recognition, 2007, 40(7): 2038-2048.
[5] YANG S L, LI Y S, HU X X, et al. Optimization study on k value of k-means algorithm[J]. Systems Engineering - Theory & Practice, 2006, 26(2): 97-101.
[6] MADIGAN D, YORK J. Bayesian graphical models for discrete data[J]. International Statistical Review, 1995, 63(2): 215-232.
[7] OSUNA E, FREUND R, GIROSI F. Training SVM: an application to face detection[R]. 1997.
[8] LECUN Y, KAVUKCUOGLU K, FARABET C. Convolutional networks and applications in vision[C]// International Symposium on Circuits and Systems, May 30-June 2, 2010, Paris, France. Piscataway: IEEE Press, 2010: 253-256.
[9] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]// The 25th International Conference on Neural Information Processing Systems, December 3-6, 2012, Lake Tahoe, USA. New York: Curran Associates Inc., 2012: 1-9.
[10] SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint, 2014, arXiv:1409.1556.
[11] CHEN T, LI M, LI Y, et al. MXNet: a flexible and efficient machine learning library for heterogeneous distributed systems[J]. arXiv preprint, 2015, arXiv:1512.01274.
[12] THE THEANO DEVELOPMENT TEAM, AL-RFOU R, ALAIN G, et al. Theano: a Python framework for fast computation of mathematical expressions[J]. arXiv preprint, 2016, arXiv:1605.02688.
[13] COLLOBERT R, KAVUKCUOGLU K, FARABET C. Torch7: a MATLAB-like environment for machine learning[C]// The 25th Annual Conference on Neural Information Processing Systems, December 12-14, 2011, Granada, Spain. [S.l.:s.n.], 2011: 1-6.
[14] JIA Y, SHELHAMER E, DONAHUE J, et al. Caffe: convolutional architecture for fast feature embedding[J]. arXiv preprint, 2014, arXiv:1408.5093.
[15] CHEN T, DU Z, SUN N, et al. DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning[C]// The 19th International Conference on Architectural Support for Programming Languages and Operating Systems, March 1-5, 2014, Salt Lake City, USA. New York: ACM Press, 2014: 269-284.
[16] CHEN Y, LUO T, LIU S, et al. DaDianNao: a machine-learning supercomputer[C]// The 47th Annual IEEE/ACM International Symposium on Microarchitecture, December 13-17, 2014, Cambridge, UK. Washington DC: IEEE Computer Society, 2014: 609-622.
[17] DU Z, FASTHUBER R, CHEN T, et al. ShiDianNao: shifting vision processing closer to the sensor[C]// The 42nd Annual International Symposium on Computer Architecture, June 13-17, 2015, Portland, USA. New York: ACM Press, 2015: 92-104.
[18] LIU D, CHEN T, LIU S, et al. PuDianNao: a polyvalent machine learning accelerator[C]// The 20th International Conference on Architectural Support for Programming Languages and Operating Systems, March 14-18, 2015, Istanbul, Turkey. New York: ACM Press, 2015: 369-381.
[19] ZHANG S, DU Z, ZHANG L, et al. Cambricon-X: an accelerator for sparse neural networks[C]// The 49th Annual IEEE/ACM International Symposium on Microarchitecture, October 15-19, 2016, Taipei, China. Piscataway: IEEE Press, 2016: 1-12.
[20] LIU S, DU Z, TAO J, et al. Cambricon: an instruction set architecture for neural networks[C]// The 43rd International Symposium on Computer Architecture, June 18-22, 2016, Seoul, Korea. New York: ACM Press, 2016: 393-405.
[21] WEI R, SCHWARTZ L, ADVE V. DLVM: a modern compiler infrastructure for deep learning systems[J]. arXiv preprint, 2017, arXiv:1711.03016.
[22] CHEN T, MOREAU T, JIANG Z, et al. TVM: end-to-end optimization stack for deep learning[J]. arXiv preprint, 2018, arXiv:1802.04799v1.