[1] TANG X Y, CAO C, WANG Y X, et al. Computing power network: the architecture of convergence of computing and networking towards 6G requirement[J]. China Communications, 2021, 18(2): 175-185.

[2] LEI B, ZHAO Q Y, ZHAO H L. Overview of edge computing and computing power network[J]. ZTE Technology Journal, 2021, 27(3): 3-6.

[3] LEI B, LIU Z Y, WANG X L, et al. Computing network: a new multi-access edge computing[J]. Telecommunications Science, 2019, 35(9): 44-51.

[4] LI J F, CAO C, LI A, et al. Computing power modeling for business experience in computing power network[J]. ZTE Technology Journal, 2020, 26(5): 34-38, 52.

[5] HE T, YANG Z D, CAO C, et al. Analysis of some key technical problems in the development of computing power network[J]. Telecommunications Science, 2022, 38(6): 62-70.

[6] KHAN W Z, AHMED E, HAKAK S, et al. Edge computing: a survey[J]. Future Generation Computer Systems, 2019, 97: 219-235.

[7] MAO Y Y, ZHANG J, SONG S H, et al. Stochastic joint radio and computational resource management for multi-user mobile-edge computing systems[J]. IEEE Transactions on Wireless Communications, 2017, 16(9): 5994-6009.

[8] MOUSAVI S S, SCHUKAT M, HOWLEY E. Deep reinforcement learning: an overview[C]// Proceedings of SAI Intelligent Systems Conference (IntelliSys). Heidelberg: Springer, 2016: 426-440.

[9] LI Y, ZHANG X, ZENG T, et al. Task placement and resource allocation for edge machine learning: a GNN-based multi-agent reinforcement learning paradigm[J]. arXiv preprint, 2023, arXiv:2302.00571.

[10] ALE L H, ZHANG N, FANG X J, et al. Delay-aware and energy-efficient computation offloading in mobile-edge computing using deep reinforcement learning[J]. IEEE Transactions on Cognitive Communications and Networking, 2021, 7(3): 881-892.

[11] LI M S, GAO J, ZHAO L, et al. Deep reinforcement learning for collaborative edge computing in vehicular networks[J]. IEEE Transactions on Cognitive Communications and Networking, 2020, 6(4): 1122-1135.

[12] YANG A, WU M, CHENG B, et al. Reinforcement learning in computing and network convergence orchestration[J]. arXiv preprint, 2022, arXiv:2209.10753.

[13] JAIN T, AVANEESH, VERMA R, et al. Latency-memory optimized splitting of convolution neural networks for resource constrained edge devices[C]// Proceedings of 2022 14th International Conference on Communication Systems & Networks (COMSNETS). Piscataway: IEEE Press, 2022: 531-539.

[14] TESSLER C, MANKOWITZ D J, MANNOR S. Reward constrained policy optimization[J]. arXiv preprint, 2018, arXiv:1805.11074.

[15] ZHUANG S, GAO C X, HE Y, et al. QC-DQN: a novel constrained reinforcement learning method for computation offloading in multi-access edge computing[C]// Proceedings of 2022 International Joint Conference on Neural Networks (IJCNN). Piscataway: IEEE Press, 2022: 1-8.

[16] BHATNAGAR S, LAKSHMANAN K. An online actor-critic algorithm with function approximation for constrained Markov decision processes[J]. Journal of Optimization Theory and Applications, 2012, 153(3): 688-708.

[17] ACHIAM J, HELD D, TAMAR A, et al. Constrained policy optimization[J]. arXiv preprint, 2017, arXiv:1705.10528.