Telecommunications Science, 2023, Vol. 39, Issue (1): 1-19. doi: 10.11959/j.issn.1000-0801.2023019
• Review •
Yu KANG1,2,3, Yaqiong LIU1,2,3, Tongyu ZHAO1,2,3, Guochu SHOU1,2,3
Revised: 2022-12-20
Online: 2023-01-20
Published: 2023-01-01
About the author: Yu KANG (1999- ), male, is a master's student at Beijing University of Posts and Telecommunications; his research interests include edge computing, Internet of vehicles, and edge intelligence.
Abstract: In the 5G era, the development of communication and computation in the Internet of vehicles (IoV) is constrained by the rapid growth of information volume. Applying AI algorithms to IoV can enable new breakthroughs in IoV communication and computation. This paper surveys the applications of AI algorithms in communication security, communication resource allocation, computing resource allocation, task-offloading decisions, server deployment, and communication-computation convergence; analyzes the achievements and shortcomings of current AI algorithms in different scenarios; and, in light of IoV development trends, discusses future research directions for AI algorithms in IoV applications.
Cite this article: Yu KANG, Yaqiong LIU, Tongyu ZHAO, Guochu SHOU. A survey on AI algorithms applied in communication and computation in Internet of vehicles[J]. Telecommunications Science, 2023, 39(1): 1-19.
Table 1  Applications of AI algorithms in communication security
Reference | AI algorithm | Optimization target | Contribution
Ref. [ ] | Q-learning | Key-update latency; encryption/decryption latency | Proposed a Q-learning-based group-key distribution and management technique for IoV that raises the cluster's communication security level and reduces communication latency
Ref. [ ] | Q-learning | High QoS in complex environments | Proposed a Q-learning-based grid routing protocol that delivers high QoS performance in terms of reliability and end-to-end latency
Ref. [ ] | DQN | High QoS | Proposed a new DQN-based dynamic service migration scheme that accounts for vehicle speed, improving QoS and achieving higher system utilization
Ref. [ ] | DQN | Performance of V2V safety-packet broadcasting | Proposed an adaptive MAC-layer algorithm based on contention-information state representation, combined with DQN, improving the performance of V2V safety-packet broadcasting
Ref. [ ] | DQN | Minimizing information-reception latency | Used DQN so that the RSU can immediately execute optimal scheduling decisions, building a green and safe vehicular network with an acceptable QoS level
Ref. [ ] | DQN | Network load and communication reliability in IoV | Proposed a DQN-based cooperative perception scheme that reduces network load and improves communication reliability in vehicular networks
Ref. [ ] | CNN-LSTM | Intrusion-detection accuracy and low latency | Proposed a CNN-LSTM deep learning algorithm on the Spark framework that satisfies the accuracy and real-time requirements of intrusion detection
Ref. [ ] | Deconvolutional network | Privacy in vehicle-road cooperative inference | Proposed three differential-privacy-based defense algorithms; theoretical analysis and experiments show they effectively defend against black-box image-reconstruction attacks while preserving the accuracy of vehicle-road cooperative inference
Ref. [ ] | MAPPO | Security and reliability of block validation | Modeled the interaction between smart vehicles and validators as a Stackelberg game solved with MAPPO, guaranteeing the security and reliability of block validation
Ref. [ ] | T-DDRL | Selection of the most trusted routing path in VANETs | Proposed a T-DDRL method in which a deep neural network, using the SDN controller as the agent, learns the most trusted routing path
Ref. [ ] | MARDPG-AG | Security of vehicular networks | Proposed a robust MARDPG-AG algorithm that enhances the security of vehicular networks
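Several of the security schemes in Table 1 build on tabular Q-learning. A minimal sketch of the core update loop on a toy environment (the states, actions, and latency-style rewards below are invented for illustration and are not taken from any surveyed paper):

```python
# Minimal tabular Q-learning sketch: an agent learns to pick the action with
# the lowest latency penalty (action 0 is best in this toy environment).
import random

random.seed(0)

N_STATES, N_ACTIONS = 4, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table initialized to zero.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Hypothetical environment: reward is the negative latency of the action."""
    reward = -1.0 * (action + 1) + random.uniform(-0.1, 0.1)
    next_state = (state + 1) % N_STATES
    return next_state, reward

state = 0
for _ in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])
    state = next_state

# After training, the greedy policy should prefer the low-latency action 0.
greedy = [max(range(N_ACTIONS), key=lambda a: Q[s][a]) for s in range(N_STATES)]
print(greedy)
```

The surveyed works replace this toy reward with domain-specific objectives (key-update latency, QoS metrics) and, for DQN variants, replace the table with a neural network.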
Table 2  Applications of AI algorithms in computing resource allocation
Reference | AI algorithm | Optimization target | Contribution
Ref. [ ] | Q-learning | Latency | Used Q-learning to effectively solve the resource allocation problem in vehicular networks with mobile edge computing
Ref. [ ] | DQN | Total computation cost | Proposed a DQN-based computing resource allocation scheme with minimal total system computation cost as the objective, meeting the low-overhead and low-latency requirements of resource allocation
Ref. [ ] | PPO | Latency | Proposed a heuristic algorithm combined with the PPO reinforcement learning algorithm, exploiting vehicles' moving and stopped states to make more effective resource-allocation decisions
Ref. [ ] | PPO | Throughput; resource-usage efficiency | Proposed a PPO-based intelligent resource allocation method with better performance in improving both blockchain throughput and resource-usage efficiency
Ref. [ ] | SMDP | Power consumption; processing time | Proposed an SMDP-based optimal computing resource allocation scheme for vehicular cloud computing systems, improving task-offloading capability
Ref. [ ] | Genetic algorithm | Network congestion | Proposed a genetic-algorithm-based offloading strategy that minimizes cloud-edge communication traffic under task-latency constraints
Ref. [ ] | Genetic algorithm | Latency and overhead | Proposed a computing resource allocation strategy based on mobile edge computing for IoV environments; the improved genetic algorithm reduces latency and overhead and improves computational accuracy and the algorithm's applicability to the studied problem
Ref. [ ] | Stackelberg game | Buyer and seller utility | Proposed an IoV-based on-demand computing resource trading management system for smart cities, building a two-stage Stackelberg game to stimulate resource trading between buyers and sellers, improving system security and optimizing the utility of both parties
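The genetic-algorithm entries in Table 2 rely on the standard select-crossover-mutate loop over candidate allocations. A toy sketch assigning tasks to edge servers to minimize the makespan (the task sizes, server speeds, and GA hyperparameters below are made-up values, not from the surveyed papers):

```python
# Toy genetic algorithm for task-to-server assignment (illustrative only).
import random

random.seed(1)

TASKS = [4.0, 2.0, 6.0, 3.0]   # hypothetical task sizes (e.g. Gcycles)
SPEEDS = [2.0, 1.0]            # hypothetical server speeds
POP, GENS, MUT = 30, 60, 0.1

def makespan(assign):
    """Fitness: completion time of the latest-finishing server (lower is better)."""
    load = [0.0] * len(SPEEDS)
    for task, server in zip(TASKS, assign):
        load[server] += task / SPEEDS[server]
    return max(load)

def crossover(a, b):
    cut = random.randrange(1, len(TASKS))
    return a[:cut] + b[cut:]

def mutate(a):
    return [random.randrange(len(SPEEDS)) if random.random() < MUT else g for g in a]

# Evolve: keep the better half (elitism), refill with mutated crossovers.
pop = [[random.randrange(len(SPEEDS)) for _ in TASKS] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=makespan)
    elite = pop[: POP // 2]
    children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                for _ in range(POP - len(elite))]
    pop = elite + children

best = min(pop, key=makespan)
print(best, makespan(best))
```

The surveyed improvements mainly concern the encoding and fitness function (latency plus overhead under deadline constraints) rather than this basic loop.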
Table 3  Applications of AI algorithms in server deployment
Reference | AI algorithm | Optimization target | Contribution
Ref. [ ] | Clustering algorithm | System utility | Designed a linear-programming-based clustering algorithm to handle each RSU's irregular coverage area and maximize system utility
Ref. [ ] | NSGA-Ⅱ | High QoS | Used NSGA-Ⅱ to solve the deployment problem of multi-cloud applications; the proposed scheme provides optimal, efficient service deployment for applications with diverse QoS requirements in multimedia edge-cloud environments
Ref. [ ] | NSGA-Ⅲ | Latency; load balancing | Designed a DEP method combining the multi-objective evolutionary algorithm NSGA-Ⅲ with the Kuhn-Munkres weighted bipartite-graph matching algorithm, with demonstrated effect on latency, load balancing, and reconfiguration cost
Ref. [ ] | NSGA-Ⅲ | Latency; workload balance; ES count | Used the non-dominated sorting genetic algorithm NSGA-Ⅲ to obtain an ES layout with low latency, balanced workload, and an appropriate number of ESs, achieving higher QoS
Ref. [ ] | DQN | Coverage; load balancing; latency | Used DQN to obtain an optimal placement scheme that jointly achieves edge-computing coverage, ES load balancing, low average latency, and other objectives
Ref. [ ] | DDPG | Task computation time | Proposed a DDPG-based service deployment scheme that reduces task computation time
Ref. [ ] | Multi-agent RL | Latency; load balancing | Proposed a multi-agent RL solution to the mobile edge-server placement problem that minimizes network latency and balances load across edge servers
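A common baseline for the server-placement problem in Table 3 is to cluster demand points (e.g. vehicle positions) and place one edge server at each cluster centroid. A minimal k-means sketch on hypothetical coordinates (the positions and number of servers are invented for illustration):

```python
# k-means sketch for edge-server placement: each server site moves to the
# centroid of the vehicles it serves, repeated until the layout stabilizes.
import math

# Hypothetical vehicle positions forming two obvious hotspots.
vehicles = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0),
            (10.0, 10.0), (11.0, 10.0), (10.0, 11.0)]
K = 2  # number of edge servers to place

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Initialize server sites on two distinct vehicles.
servers = [vehicles[0], vehicles[3]]
for _ in range(10):
    # Assign each vehicle to its nearest server.
    clusters = [[] for _ in range(K)]
    for v in vehicles:
        clusters[min(range(K), key=lambda k: dist(v, servers[k]))].append(v)
    # Move each server to the centroid of its cluster.
    servers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
               for c in clusters]

print(sorted(servers))
```

The surveyed multi-objective methods (NSGA-Ⅱ/Ⅲ, DQN) go beyond this by trading latency off against load balance and server count, but the clustering view above is the usual starting point.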
Table 4  Applications of AI algorithms in joint communication and computation resource allocation
Reference | Algorithm | Optimization target | Computing resources | Offloading type | Contribution
Ref. [ ] | Q-learning | Computation load; latency | Task vehicle; MEC server | Partial offloading | Proposed a Q-learning-based joint communication and computation resource allocation algorithm, easing the conflict between computation-heavy, low-latency vehicular applications and the limited, unevenly distributed resources of a heterogeneous IoV architecture
Ref. [ ] | Q-learning | Latency; reliability | Task vehicle; MEC server; cloud server | Full offloading | Used Q-learning at the mobile-edge-computing layer to obtain the optimal allocation of communication and computing resources, effectively reducing the total system cost
Ref. [ ] | DQN | Throughput | Task vehicle; MEC server | | Used DQN to optimize beamwidth design, maximizing system throughput while maintaining fairness among vehicles
Ref. [ ] | DQN | Service cost | Task vehicle; MEC server; service vehicle | | Proposed a DQN-based cooperative data-scheduling scheme for VEC networks that effectively optimizes data scheduling to minimize data loss
Ref. [ ] | DQN | | UAV | | Proposed a DQN-based channel-allocation and task-offloading strategy for temporary UAV-assisted VECNs, where the UAV selects the optimal task-processing strategy via DQN
Ref. [ ] | DDPG | Energy consumption; offloading performance | Task vehicle; MEC node | | Using DDPG, designed a real-time adaptive algorithm on the MEC server to allocate computing and transmission resources, improving the long-term average task success rate and transmit power
Ref. [ ] | SAC, PPO | Service cost | Task vehicle; MEC server; service vehicle | | Proposed a DSDRL framework for joint resource optimization that meets the differing needs of different services and reduces cost
Ref. [ ] | MADDPG | Latency | MEC server (mounted on UAV) | | Used MADDPG to optimize each MEC server's decision scheme so that every MEC server performs vehicle association and resource allocation in real time
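At the core of the joint allocation schemes in Table 4 is a trade-off between transmission latency and computation latency. A back-of-envelope sketch of that offloading decision (not any surveyed paper's model; the link rate and CPU frequencies are invented values):

```python
# Offloading decision sketch: offload a task when transmission latency plus
# remote-compute latency beats the local compute latency.

def local_latency(cycles, f_local):
    """Time to execute `cycles` CPU cycles on the on-board processor."""
    return cycles / f_local

def offload_latency(bits, rate, cycles, f_mec):
    """Time to upload `bits` over the link plus execution on the MEC server."""
    return bits / rate + cycles / f_mec

def decide(task):
    bits, cycles = task
    t_local = local_latency(cycles, f_local=1e9)  # hypothetical 1 GHz on-board CPU
    t_off = offload_latency(bits, rate=20e6,      # hypothetical 20 Mbit/s link
                            cycles=cycles, f_mec=10e9)  # hypothetical 10 GHz MEC
    return ("offload", t_off) if t_off < t_local else ("local", t_local)

# A compute-heavy task (small payload, many cycles) favors offloading;
# a data-heavy task (large payload, few cycles) favors local execution.
print(decide((1e6, 5e9)))    # 1 Mbit payload, 5 Gcycles
print(decide((200e6, 1e8)))  # 200 Mbit payload, 0.1 Gcycles
```

The RL methods in Table 4 learn this decision (and the accompanying bandwidth/CPU shares) under dynamic channels and vehicle mobility, where the closed-form comparison above no longer suffices.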
[1] GONG Y J, SUN H B. An overview of internet of vehicles systems[J]. China New Telecommunications, 2021, 23(17): 51-52.
[2] CHEN S Z, GE Y M, SHI Y. Technology development, application and prospect of cellular vehicle-to-everything (C-V2X)[J]. Telecommunications Science, 2022, 38(1): 1-12.
[3] CHEN S Z, HU J L, SHI Y, et al. A vision of C-V2X: technologies, field testing, and challenges with Chinese development[J]. IEEE Internet of Things Journal, 2020, 7(5): 3872-3881.
[4] LIU G, LI N, DENG J, et al. The SOLIDS 6G mobile network architecture: driving forces, features, and functional topology[J]. Engineering, 2022, 8(1): 42-59.
[5] SUN S H, DAI C Q, XU H, et al. Survey on satellite-terrestrial integration networking towards 6G[J]. Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2021, 33(6): 891-901.
[6] LIU G Y, HUANG Y H, LIN, et al. Vision, requirements and network architecture of 6G mobile network beyond 2030[J]. China Communications, 2020, 17(9): 92-104.
[7] LIU Y Q, PENG M G, SHOU G C, et al. Toward edge intelligence: multiaccess edge computing for 5G and Internet of things[J]. IEEE Internet of Things Journal, 2020, 7(8): 6722-6747.
[8] XU T W, ZHANG H L, LIU C, et al. Reinforcement learning based group key agreement scheme with reduced latency for VANET[J]. Chinese Journal of Network and Information Security, 2020, 6(5): 119-125.
[9] WANG D, ZHANG Q, LIU J, et al. A novel QoS-aware grid routing protocol in the sensing layer of Internet of vehicles based on reinforcement learning[J]. IEEE Access, 2019(7): 185730-185739.
[10] PENG Y, LIU L, ZHOU Y, et al. Deep reinforcement learning-based dynamic service migration in vehicular networks[C]// Proceedings of 2019 IEEE Global Communications Conference (GLOBECOM). Piscataway: IEEE Press, 2019: 1-6.
[11] CHOE C, CHOI J, AHN J, et al. Multiple channel access using deep reinforcement learning for congested vehicular networks[C]// Proceedings of 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring). Piscataway: IEEE Press, 2020: 1-6.
[12] ATALLAH R F, ASSI C M, KHABBAZ M J. Scheduling the operation of a connected vehicular network using deep reinforcement learning[J]. IEEE Transactions on Intelligent Transportation Systems, 2018, 20(5): 1669-1682.
[13] AOKI S, HIGUCHI T, ALTINTAS O. Cooperative perception with deep reinforcement learning for connected vehicles[C]// Proceedings of 2020 IEEE Intelligent Vehicles Symposium (IV). Piscataway: IEEE Press, 2020: 328-334.
[14] YU J Y, QI Y, WANG B Z. Distributed combination deep learning intrusion detection method for Internet of vehicles based on Spark[J]. Computer Science, 2021, 48(6A): 518-523.
[15] WU M Q, HUANG X M, KANG J W, et al. Differential privacy protection methods for vehicle-road collaborative inference[J]. Computer Engineering, 2022, 48(7): 29-35.
[16] LI M L, ZHANG Y, KANG J W, et al. Multi-agent reinforcement learning for secure data sharing in blockchain-empowered vehicular networks[J]. Journal of Guangdong University of Technology, 2021, 38(6): 62-69.
[17] ZHANG D, YU F R, YANG R, et al. A deep reinforcement learning-based trust management scheme for software-defined vehicular networks[C]// Proceedings of the 8th ACM Symposium on Design and Analysis of Intelligent Vehicular Networks and Applications. New York: ACM Press, 2018: 1-7.
[18] YOON S, CHO J H, KIM D S, et al. DESOLATER: deep reinforcement learning-based resource allocation and moving target defense deployment framework[J]. IEEE Access, 2021(9): 70700-70714.
[19] ZHOU Y, TANG F, KAWAMOTO Y, et al. Reinforcement learning-based radio resource control in 5G vehicular network[J]. IEEE Wireless Communications Letters, 2019, 9(5): 611-614.
[20] CHEN J J, FENG C Y, GUO C L, et al. Video semantics-driven resource allocation algorithm in Internet of vehicles[J]. Journal on Communications, 2021, 42(7): 1-11.
[21] YE S, XU L, LI X. Vehicle-mounted self-organizing network routing algorithm based on deep reinforcement learning[J]. Wireless Communications and Mobile Computing, 2021: 9934585, 1-9.
[22] MLIKA Z, CHERKAOUI S. Network slicing with MEC and deep reinforcement learning for the Internet of vehicles[J]. IEEE Network, 2021, 35(3): 132-138.
[23] WANG X C, WU P, SUN Y Z, et al. Internet of vehicles resource management based on deep reinforcement learning[J]. Industrial Control Computer, 2021, 34(9): 31-33, 36.
[24] WANG X C, WU P, SUN Y Z, et al. Internet of vehicles resource allocation based on federated deep reinforcement learning[J]. Electronic Measurement Technology, 2021, 44(10): 114-120.
[25] YE H, LI G Y. Deep reinforcement learning for resource allocation in V2V communications[C]// Proceedings of 2018 IEEE International Conference on Communications (ICC). Piscataway: IEEE Press, 2018: 1-6.
[26] GYAWALI S, QIAN Y, HU R. Resource allocation in vehicular communications using graph and deep reinforcement learning[C]// Proceedings of 2019 IEEE Global Communications Conference (GLOBECOM). Piscataway: IEEE Press, 2019: 1-6.
[27] CHEN X, WU C, CHEN T, et al. Age of information aware radio resource management in vehicular networks: a proactive deep reinforcement learning perspective[J]. IEEE Transactions on Wireless Communications, 2020, 19(4): 2268-2281.
[28] QIAO G, LENG S, MAHARJAN S, et al. Deep reinforcement learning for cooperative content caching in vehicular edge computing and networks[J]. IEEE Internet of Things Journal, 2019, 7(1): 247-257.
[29] ZHU M, LIU X Y, WANG X. Deep reinforcement learning for unmanned aerial vehicle-assisted vehicular networks[J]. arXiv preprint arXiv:1906.05015, 2019.
[30] HAN S S, LI Z X, YANG L Y, et al. Wireless resource allocation in vehicular networks based on reinforcement learning and NOMA[C]// Proceedings of the China Automation Congress (CAC2019). [S.l.: s.n.], 2019: 360-365.
[31] SHARIF A, LI J, SALEEM M A, et al. A dynamic clustering technique based on deep reinforcement learning for Internet of vehicles[J]. Journal of Intelligent Manufacturing, 2021, 32(3): 757-768.
[32] HUANG Y F, PENG N H, LIN Y, et al. Dynamic spectrum resource allocation in Internet of vehicles based on SAC reinforcement learning[J]. Computer Engineering, 2021, 47(9): 34-43.
[33] TIAN J, LIU Q, ZHANG H, et al. Multi-agent deep reinforcement learning based resource allocation for heterogeneous QoS guarantees for vehicular networks[J]. IEEE Internet of Things Journal, 2021, 9(3): 1683-1695.
[34] CHEN C R, SUN N, HE S B, et al. Deep learning-based joint channel estimation and equalization algorithm for C-V2X communications[J]. Journal of Computer Applications, 2021, 41(9): 2687-2693.
[35] LIAO Y, TIAN X Y, CAI Z R, et al. Intelligent channel estimation based on edge computing for C-V2I[J]. Acta Electronica Sinica, 2021, 49(5): 833-842.
[36] ZHAO N, WU H, YU F R, et al. Deep-reinforcement-learning-based latency minimization in edge intelligence over vehicular networks[J]. IEEE Internet of Things Journal, 2021, 9(2): 1300-1312.
[37] WANG R Y, LIANG Y J, CUI Y P. Intelligent resource allocation algorithm for multi-platform offloading in vehicular networks[J]. Journal of Electronics & Information Technology, 2020, 42(1): 263-270.
[38] ZHANG Y, ZHANG M, FAN C, et al. Computing resource allocation scheme of IoV using deep reinforcement learning in edge computing environment[J]. EURASIP Journal on Advances in Signal Processing, 2021, 2021(1): 1-19.
[39] LEE S-S, LEE S. Resource allocation for vehicular fog computing using reinforcement learning combined with heuristic information[J]. IEEE Internet of Things Journal, 2020, 7(10): 10450-10464.
[40] XIAO H, QIU C, YANG Q, et al. Deep reinforcement learning for optimal resource allocation in blockchain-based IoV secure systems[C]// Proceedings of 2020 16th International Conference on Mobility, Sensing and Networking (MSN). [S.l.: s.n.], 2020: 137-144.
[41] DONG X D, WU Q. Optimization method of resource allocation in vehicular cloud computing system[J]. Journal of China Academy of Electronics and Information Technology, 2020, 15(1): 92-98.
[42] LI Z J, ZHANG X L. Resource allocation and offloading decision of edge computing for reducing core network congestion[J]. Computer Science, 2021, 48(3): 281-288.
[43] GAO D. Computing resource allocation strategy based on mobile edge computing in Internet of vehicles environment[J]. Mobile Information Systems, 2022(2): 1-10.
[44] LIN X, WU J, MUMTAZ S, et al. Blockchain-based on-demand computing resource trading in IoV-assisted smart city[J]. IEEE Transactions on Emerging Topics in Computing, 2020, 9(3): 1373-1385.
[45] ZHANG H B, JING K L, LIU K J, et al. An offloading mechanism based on software defined network and mobile edge computing in vehicular networks[J]. Journal of Electronics & Information Technology, 2020, 42(3): 645-652.
[46] ZHANG H B, LIU X Y, JING K L, et al. Research on NOMA-MEC-based offloading strategy in Internet of vehicles[J]. Journal of Electronics & Information Technology, 2021, 43(4): 1072-1079.
[47] LI F, LIN Y, PENG N, et al. Deep reinforcement learning based computing offloading for MEC-assisted heterogeneous vehicular networks[C]// Proceedings of 2020 IEEE 20th International Conference on Communication Technology (ICCT). Piscataway: IEEE Press, 2020: 927-932.
[48] ZHAO H T, ZHANG T W, CHEN Y, et al. Task distribution offloading algorithm of vehicle edge network based on DQN[J]. Journal on Communications, 2020, 41(10): 172-178.
[49] LI M, GAO J, ZHAO L, et al. Deep reinforcement learning for collaborative edge computing in vehicular networks[J]. IEEE Transactions on Cognitive Communications and Networking, 2020, 6(4): 1122-1135.
[50] DAI Y, ZHANG K, MAHARJAN S, et al. Edge intelligence for energy-efficient computation offloading and resource allocation in 5G beyond[J]. IEEE Transactions on Vehicular Technology, 2020, 69(10): 12175-12186.
[51] ZHAN W, LUO C, WANG J, et al. Deep-reinforcement-learning-based offloading scheduling for vehicular edge computing[J]. IEEE Internet of Things Journal, 2020, 7(6): 5449-5465.
[52] LIU H, ZHAO H, GENG L, et al. A policy gradient based offloading scheme with dependency guarantees for vehicular networks[C]// Proceedings of 2020 IEEE Globecom Workshops (GC Wkshps). Piscataway: IEEE Press, 2020: 1-6.
[53] WANG J, LV T, HUANG P, et al. Mobility-aware partial computation offloading in vehicular networks: a deep reinforcement learning based scheme[J]. China Communications, 2020, 17(10): 31-49.
[54] XU X L, FANG Z J, QI L Y, et al. A deep reinforcement learning-based distributed service offloading method for edge computing empowered Internet of vehicles[J]. Chinese Journal of Computers, 2021, 44(12): 2382-2405.
[55] TANG D, ZHANG X, LI M, et al. Adaptive inference reinforcement learning for task offloading in vehicular edge computing systems[C]// Proceedings of 2020 IEEE International Conference on Communications Workshops (ICC Workshops). Piscataway: IEEE Press, 2020: 1-6.
[56] ZHAO T, LIU Y, SHOU G, et al. Joint latency and energy consumption optimization with deep reinforcement learning for proximity detection in road networks[C]// Proceedings of 2021 7th International Conference on Computer and Communications (ICCC). Piscataway: IEEE Press, 2021: 1272-1277.
[57] LIU G Z, DAI F, MO Q, et al. A service offloading method with deep reinforcement learning in edge computing empowered Internet of vehicles[J]. Computer Integrated Manufacturing Systems, 2022, 28(10): 3304-3315.
[58] SHI J, DU J, WANG J, et al. Distributed V2V computation offloading based on dynamic pricing using deep reinforcement learning[C]// Proceedings of 2020 IEEE Wireless Communications and Networking Conference (WCNC). Piscataway: IEEE Press, 2020: 1-6.
[59] KE H, WANG J, DENG L, et al. Deep reinforcement learning-based adaptive computation offloading for MEC in heterogeneous vehicular networks[J]. IEEE Transactions on Vehicular Technology, 2020, 69(7): 7916-7929.
[60] GENG L, ZHAO H, LIU H, et al. Deep reinforcement learning-based computation offloading in vehicular networks[C]// Proceedings of 2021 8th IEEE International Conference on Cyber Security and Cloud Computing (CSCloud)/2021 7th IEEE International Conference on Edge Computing and Scalable Cloud (EdgeCom). Piscataway: IEEE Press, 2021: 200-206.
[61] ZHAN W, LUO C, WANG J, et al. Deep reinforcement learning-based computation offloading in vehicular edge computing[C]// Proceedings of 2019 IEEE Global Communications Conference (GLOBECOM). Piscataway: IEEE Press, 2019: 1-6.
[62] YANG Z H, LU L Y. Task offloading strategy of vehicle platoon dynamic pricing based on reinforcement learning[J]. Electronic Technology & Software Engineering, 2022(5): 45-51.
[63] NI Y, HE J, CAI L, et al. Joint roadside unit deployment and service task assignment for Internet of vehicles (IoV)[J]. IEEE Internet of Things Journal, 2018, 6(2): 3271-3283.
[64] WU Z, LU Z, HUNG P C K, et al. QaMeC: a QoS-driven IoVs application optimizing deployment scheme in multimedia edge clouds[J]. Future Generation Computer Systems, 2019(92): 17-28.
[65] SHEN B, XU X, QI L, et al. Dynamic server placement in edge computing toward Internet of vehicles[J]. Computer Communications, 2021(178): 114-123.
[66] XU X, SHEN B, YIN X, et al. Edge server quantification and placement for offloading social media services in industrial cognitive IoV[J]. IEEE Transactions on Industrial Informatics, 2020, 17(4): 2910-2918.
[67] LU J, JIANG J, BALASUBRAMANIAN V, et al. Deep reinforcement learning-based multi-objective edge server placement in Internet of vehicles[J]. Computer Communications, 2022(187): 172-180.
[68] LYU W, XU X, QI L, et al. GoDeep: intelligent IoV service deployment and execution with privacy preservation in cloud-edge computing[C]// Proceedings of 2021 IEEE International Conference on Web Services (ICWS). Piscataway: IEEE Press, 2021: 579-587.
[69] KASI M K, ABU G S, AKRAM R N, et al. Secure mobile edge server placement using multi-agent reinforcement learning[J]. Electronics, 2021, 10(17): 2098.
[70] XIONG K, LENG S P, ZHANG K, et al. Research on heterogeneous radio access and resource allocation algorithm in vehicular fog computing[J]. Chinese Journal on Internet of Things, 2019, 3(2): 20-27.
[71] CUI Y, DU L, WANG H, et al. Reinforcement learning for joint optimization of communication and computation in vehicular networks[J]. IEEE Transactions on Vehicular Technology, 2021, 70(12): 13062-13072.
[72] HE Y, ZHAO N, YIN H. Integrated networking, caching, and computing for connected vehicles: a deep reinforcement learning approach[J]. IEEE Transactions on Vehicular Technology, 2017, 67(1): 44-55.
[73] LUO Q, LI C, LUAN T H, et al. Collaborative data scheduling for vehicular edge computing via deep reinforcement learning[J]. IEEE Internet of Things Journal, 2020, 7(10): 9637-9650.
[74] YANG C, LIU B, LI H, et al. Learning based channel allocation and task offloading in temporary UAV-assisted vehicular edge computing networks[J]. IEEE Transactions on Vehicular Technology, 2022, 71(9): 9884-9895.
[75] TAN G, ZHANG H, ZHOU S, et al. Resource allocation in MEC-enabled vehicular networks: a deep reinforcement learning approach[C]// Proceedings of IEEE INFOCOM 2020 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS). Piscataway: IEEE Press, 2020: 406-411.
[76] LYU Z, WANG Y, LYU M, et al. Service-driven resource management in vehicular networks based on deep reinforcement learning[C]// Proceedings of 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications. Piscataway: IEEE Press, 2020: 1-6.
[77] PENG H, SHEN X. Multi-agent reinforcement learning based resource management in MEC- and UAV-assisted vehicular networks[J]. IEEE Journal on Selected Areas in Communications, 2020, 39(1): 131-141.
[78] ZHANG J B, LYU J N, GAN C Q, et al. A reinforcement learning-based offloading strategy for Internet of vehicles edge computing[J]. Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2022, 34(3): 525-534.
[79] ZHANG H B, WANG Z X, HE X F. V2X offloading and resource allocation under SDN and MEC architecture[J]. Journal on Communications, 2020, 41(1): 114-124.
[80] LIU Y, YU H, XIE S, et al. Deep reinforcement learning for offloading and resource allocation in vehicle edge computing and networks[J]. IEEE Transactions on Vehicular Technology, 2019, 68(11): 11158-11168.
[81] KAZMI S M A, OTOUM S, HUSSAIN R, et al. A novel deep reinforcement learning-based approach for task-offloading in vehicular networks[C]// Proceedings of 2021 IEEE Global Communications Conference (GLOBECOM). Piscataway: IEEE Press, 2021: 1-6.
[82] PAN C, WANG Z, LIAO H J, et al. Asynchronous federated deep reinforcement learning-based URLLC-aware computation offloading in space-assisted vehicular networks[J]. IEEE Transactions on Intelligent Transportation Systems, 2022: 1-13.
[83] HAZARIKA B, SINGH K, BISWAS S, et al. DRL-based resource allocation for computation offloading in IoV networks[J]. IEEE Transactions on Industrial Informatics, 2022, 18(11): 8027-8038.
[84] SHI J, DU J, WANG J, et al. Deep reinforcement learning-based V2V partial computation offloading in vehicular fog computing[C]// Proceedings of 2021 IEEE Wireless Communications and Networking Conference (WCNC). Piscataway: IEEE Press, 2021: 1-6.
[85] HUANG X, HE L, CHEN X, et al. Revenue and energy efficiency-driven delay-constrained computing task offloading and resource allocation in a vehicular edge computing network: a deep reinforcement learning approach[J]. IEEE Internet of Things Journal, 2022, 9(11): 8852-8868.
[86] ZHANG K, CAO J, ZHANG Y, et al. Adaptive digital twin and multiagent deep reinforcement learning for vehicular edge computing and networks[J]. IEEE Transactions on Industrial Informatics, 2022, 18(2): 1405-1413.
[87] ZHANG X, PENG M, YAN S, et al. Joint communication and computation resource allocation in fog-based vehicular networks[J]. IEEE Internet of Things Journal, 2022, 9(15): 13195-13208.