Journal on Communications ›› 2022, Vol. 43 ›› Issue (6): 223-234. doi: 10.11959/j.issn.1000-436x.2022114

• Correspondences •

Multi-objective optimal offloading decision for cloud-edge collaborative computing scenario in Internet of vehicles

Sifeng ZHU1, Jianghao CAI1, Zhengyi CHAI2, Enlin SUN1   

  1. School of Computer and Information Engineering, Tianjin Chengjian University, Tianjin 300384, China
  2. School of Computer Science & Technology, Tiangong University, Tianjin 300387, China
  • Revised: 2022-05-09 Online: 2022-06-01 Published: 2022-06-01
  • Supported by:
    The National Natural Science Foundation of China (61972456); The Natural Science Foundation of Tianjin (20JCYBJC00140); The Open Project Fund of the Key Laboratory of Universal Wireless Communications (BUPT) of the Ministry of Education (KFKT-2020101)

Abstract:

Objectives: Computing tasks in the Internet of vehicles are highly sensitive to offloading delay, and cloud-edge collaborative computing is required to meet this requirement. However, the rapid movement of vehicles in the Internet of vehicles makes the conventional cloud-edge collaborative model unsuitable. Combining vehicle-to-vehicle communication technology with edge caching technology, this paper explores a cloud-edge collaborative computing offloading model suited to the Internet of vehicles.

Methods: In the cloud-edge collaborative computing scenario of the Internet of vehicles, it is challenging to offload services efficiently while jointly considering the offloading decisions of services and the collaborative resource allocation of edge servers and cloud servers. To address this problem, a vehicle computing network architecture based on cloud-edge collaboration was designed, in which vehicle terminals, edge servers and cloud servers could all provide computing services. A cache strategy was introduced into the Internet of vehicles scenario by classifying cacheable tasks. The cache model, delay model, energy consumption model, quality of service model and multi-objective optimization model were designed in turn, and the maximum offloading delay of a task was incorporated into the quality of service model. An improved multi-objective optimization immune algorithm (MOIA) was proposed for offloading decision making. MOIA is a multi-objective evolutionary algorithm that combines an immune mechanism with a reference-point strategy to optimize the multi-objective problem, as sketched below.
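To make the algorithmic idea concrete, the following Python sketch shows one way an immune-style clonal expansion can be combined with a reference-point selection step when searching for offloading decisions. It is a minimal illustration under assumed settings: the two objective functions, the per-tier cost constants and all parameters (NUM_TASKS, POP, GENS, CLONES) are placeholders, not the models or values used in the paper.

```python
# Minimal sketch of a reference-point-guided multi-objective immune algorithm
# for task-offloading decisions. Objectives and constants are placeholders,
# not the paper's actual delay/energy models.
import random

NUM_TASKS = 20          # tasks per vehicle to offload (assumption)
TIERS = (0, 1, 2)       # 0 = local vehicle, 1 = edge server, 2 = cloud server
POP, GENS, CLONES = 40, 100, 3

def objectives(decision):
    """Toy objectives: (total delay, total energy). Placeholder costs only."""
    delay = sum((0.8, 0.3, 0.5)[t] for t in decision)    # e.g. cloud adds upload delay
    energy = sum((1.0, 0.4, 0.2)[t] for t in decision)   # local execution costs most energy
    return delay, energy

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(pop):
    objs = [objectives(p) for p in pop]
    return [p for p, o in zip(pop, objs) if not any(dominates(q, o) for q in objs)]

def hypermutate(antibody, rate=0.1):
    # Immune-style hypermutation: randomly reassign some tasks to another tier.
    return [random.choice(TIERS) if random.random() < rate else g for g in antibody]

def reference_points(n):
    # Evenly spaced reference directions for two objectives (simplified).
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]

def select_by_reference(pop, size):
    # Normalize objectives, then keep the individual closest to each reference point.
    objs = [objectives(p) for p in pop]
    lo = [min(o[k] for o in objs) for k in range(2)]
    hi = [max(o[k] for o in objs) for k in range(2)]
    norm = [tuple((o[k] - lo[k]) / (hi[k] - lo[k] + 1e-9) for k in range(2)) for o in objs]
    chosen = []
    for r in reference_points(size):
        i = min(range(len(pop)), key=lambda j: sum((norm[j][k] - r[k]) ** 2 for k in range(2)))
        chosen.append(pop[i])
    return chosen

def moia():
    pop = [[random.choice(TIERS) for _ in range(NUM_TASKS)] for _ in range(POP)]
    for _ in range(GENS):
        elite = nondominated(pop)                          # affinity: nondominated antibodies
        clones = [hypermutate(a) for a in elite for _ in range(CLONES)]
        pop = select_by_reference(pop + clones, POP)       # reference-point truncation
    return nondominated(pop)

if __name__ == "__main__":
    for sol in moia()[:5]:
        print(objectives(sol))
```

In this sketch, the nondominated antibodies are cloned and hypermutated (the immune step), and the enlarged population is then truncated by associating each evenly spaced reference point with its nearest individual in the normalized objective space (the reference-point step), which keeps the retained solutions spread along the Pareto front.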

Results: The effectiveness of the proposed offloading decision scheme was verified by comparative experiments. The experimental results show that the computational offloading model proposed in this paper can cope with tasks with different requirements and has good adaptability while meeting the maximum offloading delay. The offloading delay in this model consists of seven parts: the cache delay for downloading the service application required by a task to the server, the upload delay for transmitting a task from a vehicle to an edge server, the upload delay for transmitting a task from an edge server to the cloud server, the execution delay of a task, the queuing delay of a task on a server, the transmission delay for tasks transmitted across regions through servers, and the transmission delay for tasks transmitted through vehicle-to-vehicle communication (a sketch of how these components combine follows this section). The experiments on the communication strategy and the cache strategy show that these delay components are closely related to one another.

The effect of the cache strategy was tested by disabling caching for half of the cacheable edge service applications (MOIA-C). The results show that the total offloading delay and the cache delay of the MOIA-C scheme increase by 35.88% and 196.85% respectively compared with the MOIA scheme, because the number of cacheable service applications decreases. The scheme then tends to offload tasks to the cloud server, which caches all service applications and has higher performance. As a result, the upload delay of tasks from the edge server to the cloud server and the queuing delay of tasks on the server increase, the execution delay decreases, the system energy consumption decreases, and the quality of service index increases.

The communication strategy adopts a hybrid transmission mode based on server communication and vehicle-to-vehicle communication, and was evaluated by disabling the vehicle-to-vehicle communication mode (MOIA-S). The results show that the total offloading delay and the communication delay of the MOIA-S scheme increase by 58.45% and 433.33% respectively compared with the MOIA scheme, because using only servers to transmit tasks puts extreme strain on bandwidth. To reduce the bandwidth pressure caused by cross-region task transmission, the scheme tends to offload tasks to the cloud server. Therefore, the cache delay of service applications and the processing delay of tasks decrease, the queuing delay increases, the system energy consumption decreases, and the quality of service index increases.
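As a small illustration of the delay composition described above, the sketch below sums the seven components for a single task. The concrete values and the rules for when a component applies (e.g. the edge-to-cloud upload only when the task is offloaded to the cloud, the vehicle-to-vehicle delay only when V2V relaying is used) are assumptions made for clarity, not the paper's exact delay model.

```python
# Illustrative composition of the seven delay components for one task.
# Field values and the selection logic are assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class TaskDelays:
    cache: float         # downloading the required service application to the server
    up_v2e: float        # uploading the task from vehicle to edge server
    up_e2c: float        # uploading the task from edge server to cloud server
    execution: float     # executing the task on the chosen node
    queuing: float       # waiting in the server's queue
    cross_region: float  # server-to-server transmission across regions
    v2v: float           # transmission over vehicle-to-vehicle links

def total_offloading_delay(d: TaskDelays, offload_to_cloud: bool, uses_v2v: bool) -> float:
    """Sum the components that apply to this task's offloading path."""
    total = d.cache + d.up_v2e + d.execution + d.queuing + d.cross_region
    if offload_to_cloud:
        total += d.up_e2c      # extra hop only when the task leaves the edge
    if uses_v2v:
        total += d.v2v         # only if part of the path uses V2V relaying
    return total

# Example: a cloud-offloaded task that does not use V2V relaying
d = TaskDelays(cache=0.12, up_v2e=0.05, up_e2c=0.20, execution=0.30,
               queuing=0.08, cross_region=0.0, v2v=0.0)
print(total_offloading_delay(d, offload_to_cloud=True, uses_v2v=False))
```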

Conclusions: Based on vehicle-to-vehicle communication technology and edge caching technology, this paper proposes an adaptive service caching and task offloading strategy, which effectively reduces the total delay of vehicle tasks and the energy consumption of vehicles while ensuring quality of service, and provides better service for highly delay-sensitive tasks in Internet of vehicles scenarios.

Key words: Internet of vehicles, cloud-edge collaboration, offloading decision, edge cache, multi-objective optimization immune algorithm

