
A Context-aware Task Offloading Scheme in Collaborative Vehicular Edge Computing Systems

  • Jin, Zilong (School of Computer and Software, Nanjing University of Information Science and Technology) ;
  • Zhang, Chengbo (School of Computer and Software, Nanjing University of Information Science and Technology) ;
  • Zhao, Guanzhe (Huihua College of Hebei Normal University) ;
  • Jin, Yuanfeng (Department of Physics, Yanbian University) ;
  • Zhang, Lejun (College of Information Engineering, Yangzhou University)
  • Received : 2020.12.11
  • Accepted : 2021.01.23
  • Published : 2021.02.28

Abstract

With the development of mobile edge computing (MEC), emerging applications such as self-driving, augmented reality (AR), and traffic perception have appeared. Nevertheless, traditional cloud computing solutions, with their high latency and low reliability, struggle to meet the requirements of the growing number of smart cars (SCs) running computation-intensive applications. Hence, this paper studies an efficient offloading decision and resource allocation scheme in collaborative vehicular edge computing networks with multiple SCs and multiple MEC servers to reduce latency. To solve this problem effectively, we propose a context-aware offloading strategy based on the differential evolution (DE) algorithm that considers vehicle mobility, roadside unit (RSU) coverage, and vehicle priority. On this basis, an autoregressive integrated moving average (ARIMA) model is employed to predict idle computing resources according to base station traffic in different periods. Simulation results demonstrate that the proposed context-aware vehicular task offloading (CAVTO) optimization scheme reduces the system delay significantly.

Keywords

1. Introduction

The emergence of cloud computing and 5G communication technology has driven the transformation of the automotive industry. Internet of Vehicles (IoV) [1] devices now have greater computational and service capability. Specifically, they can provide wireless communication services for vehicle terminals, roadside units (RSUs), and pedestrians in intelligent transportation systems, realizing the vehicle to vehicle (V2V), vehicle to infrastructure (V2I), vehicle to person (V2P), and vehicle to network (V2N) communication modes [2].

Nevertheless, the drawbacks of cloud computing are gradually being exposed with the explosive growth of vehicles and vehicle equipment. Traditional cloud computing solutions [3,4], with their high latency and low reliability, struggle to meet users' growing requirements for transmission bandwidth and data processing delay in terms of real-time performance and security, which easily leads to frequent traffic jams and traffic accidents. Also, owing to the limited computing capability of vehicle terminals, the ever-increasing volume of vehicle data has become an essential factor restricting the development of intelligent transportation.

To cope with this disturbing problem, an emerging technique, mobile edge computing (MEC) [5-7], is applied to support delay-critical services and compute-intensive applications. The fusion of MEC and IoV technologies has developed into vehicular edge computing (VEC). Driven by VEC technology [8,9], computing resources have been pushed to the edge at the RSUs. The tasks generated during vehicle driving can be executed locally or offloaded to the MEC server, making up for the considerable transmission delay and unstable connections of the traditional centralized IoV network [10,11].

Computing offloading in VEC is a current research hotspot [12-16]. However, the existing works have some deficiencies. First of all, existing models did not consider the processing priority when multiple tasks are concurrent; for example, the priority of driverless cars is definitely higher than that of human-crewed vehicles, because driverless cars have stricter requirements for data processing delay and safe driving. Furthermore, current research ignored the limitation of MEC server resources and the load balancing of MEC server clusters [17,18].

To address the above problems and further reduce the task offloading delay so as to guarantee driving safety and traffic efficiency, this paper proposes a context-aware vehicular task offloading (CAVTO) optimization scheme to solve the computation and communication resource allocation problem in a delay-sensitive VEC system. The main contributions of this paper are summarized as follows.

● A context-aware task offloading framework based on software defined network (SDN) and network function virtualization (NFV) technology is modeled in a collaborative VEC system. Furthermore, the optimization function of delay minimization is formulated as an NP-hard problem.

● A differential evolution (DE) algorithm is proposed to solve the joint optimization problem of offloading decision and resource allocation. The ARIMA model is used to predict idle computing resources to support cooperative vehicular task offloading, improve resource utilization, and reduce system delay.

● The performance of the context-aware vehicular task offloading (CAVTO) optimization scheme is evaluated by comparing it with other baseline algorithms. Simulation results show that the CAVTO scheme could generate an allocation strategy close to the optimal.

This paper is organized as follows: An overview of the related work is given in Section 2. Section 3 presents the offloading framework and formulates the system model. The problem solution and proposed algorithms are described in Section 4, and experiments and simulations are presented in Section 5. Finally, a summary of this paper is given in Section 6.

2. Related Works

The computing offloading of VEC is a research hotspot. Current computing offloading strategies can be divided into the following categories: delay minimization, energy consumption minimization, and trading off energy consumption against delay. Depending on the model, one or more optimization goals are designed and then solved with appropriate algorithms.

From the perspective of offloading architecture design, Sun et al. [19] considered offloading tasks to service vehicles moving in the same direction within a specific communication range, and proposed an ALTO algorithm based on MAB theory to minimize offloading delay. Studies [20,21] employed a vehicle cloud and remote cloud collaborative offloading method: the task is preferentially offloaded to the vehicle cloud and then offloaded to the remote cloud when computing resources are insufficient; [21] further considers the heterogeneity of vehicles on the basis of [20]. Zhu et al. [22] deployed base stations in different regions; vehicles entering and leaving must report to the base stations, which are responsible for task scheduling. Huang et al. [23] creatively proposed a PVEC algorithm that utilizes parked vehicles as idle edge computing nodes and formulated a resource scheduling optimization problem.

From the perspective of the offloading strategy's optimization goal, Sun et al. [24] formulated a mixed-integer nonlinear programming problem to maximize the system's offloading utility with the joint optimization of offloading decision and task scheduling. Zhang et al. [25] proposed an offloading framework considering the heterogeneity and mobility of vehicles, with the goal of minimizing the total cost of vehicle task offloading under delay constraints. Dai et al. [26] divided the offloading problem into two sub-problems: the optimal selection of VEC servers, and the joint optimization of load balancing and offloading decisions. Tang et al. [27] minimized the system delay under energy constraints: a decision tree algorithm is proposed to solve the task deployment sub-problem, and a dynamic programming technique is proposed to solve the delayed offloading sub-problem. Guo et al. [28] proposed a novel resource allocation mechanism for edge computing resource providers, which performs task offloading while observing the resource constraints of the edge server, takes the supplier's income as the optimization target, and formulates the corresponding resource allocation problem.

From the perspective of the offloading algorithm, Liu et al. [29] used a semi-Markov decision process and linear programming to solve the optimal multi-resource allocation problem. Klaimi et al. [30] proposed a dynamic resource allocation algorithm based on game theory, which minimized CPU resources and energy consumption in terms of delay and request blocking probability; the existence of a Nash equilibrium was also proved. Feng et al. [31] adopted ant colony optimization to solve the task scheduling problem in the AVE framework. Tham et al. [32] designed a load balancing optimization method based on a convex optimization algorithm, which improved the convergence speed and optimized the average system utilization. Wei et al. [33] combined Q-learning with a DNN to optimize computing offloading in wireless cellular networks. Li et al. [34] adopted the ADMM method to study how to perform regression analysis when training samples are kept secret on the source device.

In the above research schemes, the impact of idle resources on overall computing offloading performance has been ignored. In addition, the load balance of the edge servers is not considered under the premise of guaranteeing user service quality, which would improve system operation efficiency from the service provider's perspective. Finally, the communication resources, computing resources, and storage capacity of the vehicular edge computing network are limited; if the reasonable allocation of such resources is ignored, the network will not be able to handle the enormous amount of data generated by vehicle equipment.

3. System Architecture and Problem Formulation

In this section, a context-aware task offloading framework based on SDN and NFV technology is modeled in the collaborative VEC system. Furthermore, the delay minimization problem is formulated by optimizing MEC server resource allocation and vehicle task offloading decisions.

3.1 System Architecture

As illustrated in Fig. 1, a vehicular task offloading framework with multiple SCs and multiple RSUs is considered in this paper. The MEC servers are deployed within the range of the RSUs to provide computing services for resource-constrained vehicles. The RSUs can communicate with multiple vehicles simultaneously through massive multiple-input multiple-output (MIMO) [35,36] technology. A virtual machine is deployed on each MEC server to facilitate centralized management of the network environment and to realize real-time data perception and rapid response.


Fig. 1. System model of CAVTO architecture

In order to further improve system resource utilization, SDN and NFV technology are used to support the VEC system architecture. SDN is a new network design concept [37] that uses a layered idea to separate the data plane from the control plane, realizing openness and programmability and breaking the closedness of traditional network equipment. NFV [38] consolidates many types of network equipment (such as servers, switches, and storage) into a data center network and, borrowing IT virtualization technology, forms virtual machines (VMs) [39] that run on standard server virtualization software, so that network functions can be deployed anywhere in the network without installing new hardware devices.

Overall, the SDN/NFV-based architecture can be divided into three parts. The bottom is the Data Plane, which comprises data collection from the vehicular ad-hoc network (VANET), RSUs, and MEC servers. The middle layer is the Control Plane, where the SDN controller uniformly manages the vehicle and road information collected by the RSUs. The top layer is the core network layer, supporting information sharing and service migration scheduling between MEC servers.

Fig. 2 shows the detailed function modules in the control plane. It is made up of three main components, namely, Information Collection, Context-Aware, and Decision Model.


Fig. 2. Context-aware task offloading decision model

● Information Collection: this module is mainly responsible for the extraction and collection of information, including the speed, direction, and driving attributes of the vehicle. It also perceives information such as the accessible RSUs within the vehicle's communication range, as well as the computing and communication resources of the attached MEC servers.

● Context-Aware: as the core part of the entire computing offloading framework, this module provides load balancing, real-time monitoring, resource awareness, and prediction services. The load balancer can improve server performance and effectively prevent the data packet loss, processing slowdowns, or even crashes caused by server overload. The monitoring service provides real-time monitoring of device operation in the edge network. In addition, the ARIMA algorithm is employed in this module to learn the MEC servers' load changes and predict idle computing resources so as to achieve resource scheduling.

● Decision Model: according to the results of context awareness, this module makes the corresponding resource allocation, offloading, and scheduling strategies and sends them to RSUs and vehicles for execution.

3.2 Offloading Model

Table 1 shows some important parameters of the vehicular edge computing model. We assume that there are n SCs driving on the road. Each SC has k tasks to handle; the j-th computation task of the i-th SC is denoted by Taskij, where \(i \in N\), \(N=\{1,2, \cdots, n\}\), and \(j \in K\), \(K=\{1,2, \cdots, k\}\). Each Taskij can be represented by a tuple \(\left\{b, w, t^{\text {limit }}\right\}\), where b represents the data size of the computing task, w represents the priority of task processing (used to distinguish whether the task is a traditional computing task or a safety-oriented computing task), and tlimit is the delay constraint. There are also m RSUs evenly distributed along the side of the road. Each RSU equipped with a MEC server can be regarded as a service node. The heterogeneity of RSUs and MEC servers enables service nodes to have different coverage and computing capabilities. Each service node Sm can be represented by a tuple \(\left\{r_{m}, l_{m}, f_{m}\right\}\), in which rm is the coverage radius of the RSU, lm is the vertical distance from the RSU to the roadside, and fm is the computing capability of the MEC server.
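For concreteness, the task and service-node tuples above can be sketched as simple Python records (a minimal illustration; the class and field names are ours, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Task:
    b: float        # data size of the computing task (bits)
    w: int          # processing priority (safety-oriented tasks get higher w)
    t_limit: float  # delay constraint t^limit (seconds)

@dataclass
class ServiceNode:
    r_m: float  # coverage radius of the RSU (m)
    l_m: float  # vertical distance from the RSU to the roadside (m)
    f_m: float  # computing capability of the MEC server (cycles/s)
```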

Table 1. Model parameters


Due to the limited computing resources of the vehicle itself, it is not enough to support the completion of the entire computing task locally. Offloading tasks to nearby RSUs with MEC servers is an effective solution. For each independent subtask, its offloading strategy can be expressed as:

\(X_{i j}=\{0,1\}\)       (1)

where Xij =0 denotes the task will be executed locally, and Xij =1 denotes the task will be offloaded to RSUs for processing.

The entire offloading process is divided into three stages. First, the SCs upload the computation task to the RSUs through the V2I communication method. Then, RSUs transfer the task to the MEC servers and take advantage of powerful computing resources to process the task. Finally, the computation result will be returned to SCs. Therefore, the delay in the entire offloading process is mainly caused by the task upload delay, the computing delay on MEC servers, and the result return delay. Since RSUs and MEC servers are connected by optical fiber, the transmission delay is negligible.

3.2.1 Local Computing Model

When the subtask of the vehicle is executed locally, the local computing delay can be expressed as :

\(T_{ij}^{local} = \frac{(1-x_{ij}) \cdot b_{ij} \cdot C^{local}}{f_i^{local}}\)      (2)

where bij is the data size of Taskij, Clocal means the CPU cycles required per bit on the vehicle, and filocal is the computing capacity of the i-th SC.
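Eq. (2) maps directly to a one-line helper (a minimal sketch; the function and argument names are ours):

```python
def local_delay(x_ij, b_ij, c_local, f_local):
    """Eq. (2): local computing delay T_ij^local.

    x_ij:    offloading decision (0 = execute locally, 1 = offload)
    b_ij:    task data size in bits
    c_local: CPU cycles required per bit on the vehicle
    f_local: computing capacity of the i-th SC in cycles/s
    """
    return (1 - x_ij) * b_ij * c_local / f_local
```

Note that the delay is zero when the task is offloaded (x_ij = 1), since the local CPU then does no work on it.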

3.2.2 MEC Computing Model

According to the scene shown in Fig. 3, the vehicle \(C_i\) travels at a constant speed \(v_i\). Due to the mobility of the vehicle, the distance between the vehicle and the center of the RSU is constantly changing. \(T_{i}^{\text {stay }}\) is the time from the entry to the departure of vehicle \(C_i\) within the coverage of the RSU, which can be expressed as:

\(T_{i}^{s t a y}=\frac{2 \sqrt{r^{2}-l^{2}}}{v_{i}}\)       (3)


Fig. 3. Driving map of the vehicle within the coverage of RSU

When the vehicle reaches the coverage area of the RSUs, the SCs communicate with RSUs through LTE-V2I mode. According to the Shannon formula, the uplink transmission rate can be calculated as:

\(r^{u p}=B_{i j} \cdot \log _{2}\left(1+\frac{p_{i} \cdot h_{i}}{B_{i j} \cdot N_{\sigma}}\right)\)        (4)

where Bij denotes the allocated channel bandwidth, \(p_i\) is the transmit power of the i-th SC, hi means the channel gain, and \(N_{\sigma}\) denotes the noise power spectral density.

The transmission delay of the uplink can be expressed as:

\(T_{i j}^{u p}=\frac{d_{i j} \cdot X_{i j}}{r^{u p}}\)       (5)

After the vehicular task is uploaded to the RSU, computing resources are provided through the attached MEC server. Therefore, the computing delay of task processing can be expressed as:

\(T_{i j}^{c o m}=\frac{d_{i j} \cdot X_{i j} \cdot C^{m e c}}{f_{m}^{m e c}}\)       (6)

where Cmec implies the number of CPU cycles of unit data in the MEC system and \(f_{m}^{m e c}\) represents computing resource allocated by m-th MEC server.

Finally, after the task has been processed, the MEC servers return the computing result to SCs. The downlink transmission delay is:

\(T_{i j}^{\text {down }}=\frac{\alpha \cdot d_{i j} \cdot X_{i j}}{r^{\text {down }}}\)       (7)

where α is the ratio of the size of the returned computing result to that of the uploaded task.

It is worth noting that a MEC server can only start processing a task after it has completely received the task from the SC, and it can only start sending back the computing result after the entire computing task has been completed.

In summary, the total computing offloading delay of processing Taskij can be expressed as:

\(T_{i j}^{m e c}=T_{i j}^{u p}+T_{i j}^{c o m}+T_{i j}^{d o w n}\)       (8)
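Putting Eqs. (3)-(8) together, the offloading-side delay model can be sketched as follows (a minimal illustration under our own naming; the parameter values used in comments are assumptions, not the paper's):

```python
import math

def stay_time(r, l, v):
    """Eq. (3): time a vehicle at speed v spends inside RSU coverage of
    radius r, with the RSU a vertical distance l from the roadside."""
    return 2.0 * math.sqrt(r**2 - l**2) / v

def uplink_rate(bandwidth, p_tx, gain, noise_density):
    """Eq. (4): Shannon capacity of the V2I uplink."""
    return bandwidth * math.log2(1.0 + p_tx * gain / (bandwidth * noise_density))

def mec_delay(d_ij, x_ij, c_mec, f_mec, r_up, r_down, alpha):
    """Eqs. (5)-(8): upload + MEC computing + result-return delay."""
    t_up = d_ij * x_ij / r_up                 # Eq. (5): uplink transmission
    t_com = d_ij * x_ij * c_mec / f_mec       # Eq. (6): computing on MEC
    t_down = alpha * d_ij * x_ij / r_down     # Eq. (7): downlink return
    return t_up + t_com + t_down              # Eq. (8): total offloading delay
```

Constraint C4 in Section 3.3 then amounts to requiring `mec_delay(...) <= stay_time(...)` for each offloaded task.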

3.3 Problem Formulation

Based on the above-mentioned offloading model, the goal of this article is to minimize the average task processing delay of the vehicles through the joint optimization of computing resource allocation and transmission bandwidth allocation. An objective function Obj is introduced, which can be expressed as:

\(O b j=\frac{\sum_{i=1}^{n} \sum_{j=1}^{k}\left(T_{i j}^{l o c a l}+T_{i j}^{m e c}\right)}{n k}\)       (9)

The final computing offloading problem model can be established as:

\(\begin{gathered} \min \operatorname{Obj}\left(X_{i j}, B_{i j}, f_{m}^{m e c}, w_{i}\right) \\ \text { s.t. } \quad C_{1}: X_{i j}=\{0,1\} \\ C_{2}: 0 \leq \sum_{i=1}^{n} B_{i j} \leq B_{\text {total }} \\ C_{3}: 0 \leq \sum_{m=1}^{m} f_{m}^{m e c} \leq f_{\text {total }} \\ C_{4}: T_{i}^{\text {mec }} \leq T_{i}^{\text {stay }} \\ C_{5}: \max \sum_{i=1}^{n} w_{i} \end{gathered}\)       (10)

where constraint C1 is the offloading decision of the vehicle, constraint C2 means the allocated bandwidth cannot exceed the total bandwidth, constraint C3 means the allocated computing resources cannot exceed the total resources of the MEC servers, constraint C4 ensures that task processing will not be interrupted, i.e., the computing task must be completed before the vehicle leaves the range of the RSU, and constraint C5 maximizes the total weight, that is, higher-priority security tasks are processed preferentially.
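With the per-task delays in hand, the objective of Eq. (9) is simply their mean over all n·k tasks; a hypothetical evaluation helper (our naming, not from the paper) could be:

```python
def average_delay(t_local, t_mec):
    """Eq. (9): average task-processing delay over n SCs with k tasks each.

    t_local and t_mec are n-by-k nested lists of per-task delays; for each
    task exactly one of the two terms is non-zero, depending on the
    offloading decision X_ij.
    """
    n, k = len(t_local), len(t_local[0])
    total = sum(t_local[i][j] + t_mec[i][j]
                for i in range(n) for j in range(k))
    return total / (n * k)
```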

4. Proposed Scheme

In this section, we propose the CAVTO optimization scheme to solve the above optimization problem. The CAVTO optimization scheme is mainly divided into two parts. Firstly, a differential evolution (DE) algorithm is proposed to optimize the offloading decision and resource allocation in collaborative vehicular edge computing networks so as to minimize the average delay. Secondly, an ARIMA-based machine learning algorithm is used to predict idle computing resources to ensure MEC server load balance and improve utilization efficiency. The details are as follows.

4.1 Latency Optimization

The optimization problem in Equation (10) above is NP-hard. We adopt the DE algorithm to obtain an approximate optimal solution. The DE algorithm is a heuristic search algorithm based on the biological evolution process: the problem to be solved is simulated as an evolutionary process, and the optimal solution is found through evolution. DE algorithms usually start with a set of potential solutions composed of genetically encoded individuals. After fitness calculation, selection, crossover, and mutation, these individuals evolve from generation to generation to produce better and better approximate solutions. Algorithm 1 shows the detailed steps.

Algorithm 1 Latency Optimization Based on Differential Evolution Algorithm

Floating-point encoding is employed instead of traditional binary encoding, which effectively reduces storage space and algorithm complexity. The chromosome coding method is shown in Fig. 4: the total number of computing tasks of all vehicles is set as the chromosome length, each gene represents a computing task, and the value of the corresponding gene represents the offloading strategy, bandwidth allocation strategy, computing resource allocation strategy, and task processing priority. The strategy set can be expressed as:

\(S_{i j}=\left\{X_{i j}, B_{i j}, f_{m}^{m e c}\right\}\)       (11)


Fig. 4. Chromosome encoding

After the population coding is completed, the fitness needs to be defined. The greater the fitness, the more likely an individual is to be inherited by the next generation. Since the objective function is to minimize the average delay of vehicular offloading, the fitness is set as:

\(\text { Fit }=\frac{1}{O b j}\)       (12)

The principle of selection is that the higher the fitness, the more likely an individual is to be selected. According to roulette wheel selection, the selection probability is expressed as:

\(p(i)=\frac{\operatorname{Fit}(i)}{\sum_{j=1}^{N} \operatorname{Fit}(j)}\)       (13)

The mutation operation changes gene values so that the algorithm has the ability of local random search and avoids premature convergence. Three different individuals are randomly selected from the strategy set, and the produced intermediate individual is expressed as:

\(I_{i}(g+1)=S_{r 1}(g)+F \cdot\left(S_{r 2}(g)-S_{r 3}(g)\right)\)       (14)

where F is the scaling factor.

The purpose of the crossover operation is to increase the diversity of solutions. The result of the crossover between Si(g) and Ii(g+1) is:

\(H_{i}(g+1)=\left\{\begin{array}{cc} I_{i}(g+1) & \operatorname{rand}(0,1) \leq C R \\ S_{i}(g) & \text { otherwise } \end{array}\right.\)       (15)

where CR is crossover probability.

Through the above steps, the solution set of the strategy \(S_{i j}=\left\{X_{i j}, B_{i j}, f_{m}^{m e c}\right\}\) evolves from generation to generation, producing better and better approximate solutions. Finally, the optimal solution \(S^{*}\) is recorded, and the minimum average delay \(O b j^{*}\) is calculated.
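The evolutionary loop described above can be sketched in Python as follows. This is a minimal sketch under our own assumptions, not the paper's Algorithm 1: the `fitness` callable wraps Eq. (12), the mutation and crossover follow Eqs. (14) and (15), and standard one-to-one survivor selection is used in place of the roulette rule of Eq. (13) for brevity; the function name and its default hyperparameters are ours.

```python
import random

def differential_evolution(fitness, dim, bounds, pop_size=100, gens=150,
                           F=0.5, CR=0.1):
    """Minimal DE loop following Eqs. (12), (14), and (15).

    fitness : callable mapping a candidate vector to Fit = 1/Obj (Eq. (12)).
    dim     : chromosome length (total number of computing tasks).
    bounds  : (low, high) range for every gene (floating-point encoding).
    """
    low, high = bounds
    pop = [[random.uniform(low, high) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        next_pop = []
        for i in range(pop_size):
            # Mutation, Eq. (14): I_i = S_r1 + F * (S_r2 - S_r3)
            r1, r2, r3 = random.sample(
                [j for j in range(pop_size) if j != i], 3)
            mutant = [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
                      for d in range(dim)]
            # Crossover, Eq. (15): take the mutant gene when rand(0,1) <= CR
            trial = [mutant[d] if random.random() <= CR else pop[i][d]
                     for d in range(dim)]
            # Clamp genes to the feasible range
            trial = [min(max(g, low), high) for g in trial]
            # One-to-one survivor selection: keep the fitter individual
            next_pop.append(trial if fitness(trial) > fitness(pop[i])
                            else pop[i])
        pop = next_pop
    return max(pop, key=fitness)
```

In the actual scheme, the decoded genes would be mapped back onto the strategy set {X_ij, B_ij, f_m^mec} and checked against constraints C1-C5 before the fitness is evaluated.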

4.2 Resource Prediction

In context-aware collaborative vehicular edge computing networks, the SDN controller can monitor MEC server load changes in real time. Traffic flow differs greatly across regions and time periods, which leads to unbalanced server load. In order to improve resource utilization, adaptively learning the load changes of the servers and realizing resource prediction and scheduling is an effective way to improve traffic conditions. Therefore, the ARIMA model is applied to predict the number of vehicles arriving in an area in the next time period based on historical traffic flow data.

The basic idea of the ARIMA model [40] is that a time-varying data sequence is a random sequence that can be described by a mathematical model, so that future values can be predicted from the past and current values of the time series. The ARIMA model can be expressed as:

\(y_{T}=\mu+\sum_{i=1}^{p} \phi_{i} y_{T-i}+\varepsilon_{T}+\sum_{i=1}^{q} \theta_{i} \varepsilon_{T-i}\)       (16)

where p is the autoregressive order, d denotes the number of differencing operations applied to the series before fitting, and q is the moving average order.

Suppose there are R regions in the vehicular edge network, represented by the set \(\{1,2, \cdots, r, \cdots, R\}\). \(P^{T}\left(A_{r}\right)\) is the traffic flow of region \(A_r\) in period T. According to the traffic flow over the past T periods \(P^{1}\left(A_{r}\right), P^{2}\left(A_{r}\right), \cdots, P^{T}\left(A_{r}\right)\), the traffic flow in period T+1 can be predicted. The greater the traffic flow in a period, the higher the server load in the area. After the prediction is completed, the sequence \(P^{T+1}\left(A_{1}\right), P^{T+1}\left(A_{2}\right), \cdots, P^{T+1}\left(A_{R}\right)\) can be obtained. Sorting it in descending order of traffic flow, the top (high-load) regions and bottom (low-load) regions are identified; the idle computing resources of the low-load regions are then dispatched to the high-load regions to cooperatively complete the computing offloading.

The resource prediction based on the ARIMA model mainly consists of five steps.

Step 1: Check whether the data sequence over the past periods \(P^{1}\left(A_{r}\right), P^{2}\left(A_{r}\right), \cdots, P^{T}\left(A_{r}\right)\) is stationary. If the series is nonstationary, perform d order difference operation until a stationary sequence is obtained.

Step 2: Determine ARIMA model parameters p and q according to sequence characteristics.

Step 3: Calculate the autocorrelation function and partial autocorrelation function of the time series, and check whether the ARIMA(p, d, q) model is satisfied.

Step 4: If satisfied, forecast the traffic flow \(P^{T+1}\left(A_{r}\right)\) in period T + 1. Otherwise, repeat Step 2 and Step 3.

Step 5: Sort sequence \(P^{T+1}\left(A_{1}\right), P^{T+1}\left(A_{2}\right), \cdots, P^{T+1}\left(A_{r}\right)\) in descending order to implement resource scheduling.
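The steps above can be sketched with a numpy-only helper that fits just the AR part after differencing, i.e., an ARIMA(p, d, 0) model. This is a simplified stand-in for a full ARIMA(p, d, q) fit (the MA term q is omitted for brevity), and the function name `predict_next` and the least-squares fitting choice are our own assumptions:

```python
import numpy as np

def predict_next(flow, p=2, d=1):
    """One-step traffic-flow forecast with a minimal ARIMA(p, d, 0) model:
    difference the series d times (Step 1), fit the AR(p) coefficients by
    least squares, forecast one step ahead, then integrate back."""
    x = np.asarray(flow, dtype=float)
    tails = []
    for _ in range(d):
        tails.append(x[-1])
        x = np.diff(x)                      # d-th order differencing
    # Lagged design matrix for AR(p): x_t = mu + sum_i phi_i * x_{t-i}
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    lags = x[-1:-p - 1:-1]                  # most recent p values, newest first
    pred = coef[0] + coef[1:] @ lags
    for tail in reversed(tails):            # undo the differencing
        pred += tail
    return float(pred)
```

For the per-region scheduling of Step 5, the forecasts for all regions would be sorted in descending order and resources dispatched from the bottom (low-load) regions to the top (high-load) ones.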

4.3 CAVTO Optimization Scheme

Combining the DE-based delay optimization algorithm and the ARIMA-based resource prediction algorithm, the CAVTO optimization scheme is presented in Algorithm 2. First, based on historical data, the traffic flow of different areas in the next time period is predicted; the traffic flow indirectly reflects the load of the servers in each region. Then, low-load MEC servers assist high-load MEC servers in performing computing offloading. Finally, the DE algorithm is adopted to jointly optimize resource allocation and offloading decisions. The results show that the CAVTO optimization scheme not only effectively reduces the offloading delay but also improves resource utilization.

Algorithm 2 CAVTO optimization scheme

5. Simulation Results and Analysis

In this section, we set the environmental parameters of the simulation experiment and evaluate the performance of the CAVTO optimization scheme by comparing it with other baseline algorithms. The details are as follows.

5.1 Parameters Settings

The scenario assumed in this article is a one-way traffic road. A task of a driving vehicle can be executed locally or offloaded to the RSUs. Due to the heterogeneity of vehicles, the number of computing tasks per vehicle is assumed to be 5-10. The computing capability of each vehicle is randomly distributed in the range 4×10^6 to 2×10^7 cycles/s, and each car travels at a constant speed. The specific simulation parameters are shown in Table 2.

Table 2. Simulation parameters


5.2 Experiment Results

Suppose there are five vehicles driving within the coverage of an RSU in the same time period. In order to allocate resources reasonably and make wise offloading decisions, the DE algorithm is employed to minimize the average system delay. As shown in Fig. 5, we initialize the population size to 100, set the number of generations to 150, and set the crossover probability to 0.1. The experimental results show that the optimal objective function value reaches its minimum of 1806 ms after 140 iterations. The detailed allocation strategy of the DE algorithm is shown in Fig. 6.


Fig. 5. The DE offloading scheme


Fig. 6. Allocation strategy of DE algorithm

In terms of idle resource prediction, we use the ARIMA model to predict the traffic flow in the next period. The data set, provided by the Korea Expressway Corporation [41], is shown in Fig. 7; it records the hourly traffic flow on a certain highway.


Fig. 7. Traffic flow data set

After initial processing of the data, we compute diagnostics such as the standardized residuals, histogram plus estimated density, autocorrelation function, and partial autocorrelation function. The purpose is to analyze the reliability and periodicity of the data and to check whether the data obey a normal distribution. The results are shown in Fig. 8. The calculations show that the data set has good stationarity and is suitable for the ARIMA model.


Fig. 8. Model sequence diagram

Fig. 9 presents the traffic flow prediction based on the ARIMA model. According to the traffic flow over the past six days, the traffic flow within the next 24 hours is predicted. Time nodes with less traffic are set as the idle resource nodes of the region, and their computing resources are dispatched to other regions to improve offloading efficiency.


Fig. 9. Traffic flow prediction based on ARIMA model

5.3 Algorithm Performance Evaluation

In order to further evaluate the performance of the CAVTO optimization scheme, we compare CAVTO with the following offloading schemes: 1) Local Execution (LE): all the computing tasks of the vehicles are executed locally. 2) MEC Execution (ME): all the computing tasks are offloaded to RSUs for execution. 3) Differential Evolution algorithm (DE): the DE-based scheme without resource forecasting and scheduling.

Fig. 10 shows the relationship between the number of vehicles and the average delay. As the number of vehicles increases, the CAVTO scheme and the DE algorithm show a slow upward trend; the DE algorithm is inferior to CAVTO in average delay because it performs no prediction and scheduling of idle resources. The ME strategy offloads all computing tasks to RSUs; its delay optimization performs well at first, but as the number of vehicles increases, the channel becomes congested and resource demand exceeds supply, which causes a large delay. In summary, CAVTO is superior to the other three algorithms and reduces the average delay by up to 16% compared with the DE strategy.


Fig. 10. The average delay of tasks processing with the increasing of vehicles

Furthermore, in order to study vehicular tasks with different priorities, the total weight (\(\sum_{i=1}^{n} w_{i}\)) of completed tasks under different offloading schemes was measured through repeated experiments.

As can be seen from Fig. 11, the total weight under the CAVTO optimization scheme is the largest, which shows that the strategy prioritizes the processing of high-priority tasks. Compared with the DE, ME, and LE strategies, which do not consider priority, the CAVTO optimization scheme increases the total weight by 20%, 31%, and 52%, respectively.


Fig. 11. The total weight of tasks completion under different offloading schemes

6. Conclusion

In this paper, an effective CAVTO optimization scheme is proposed for a vehicular edge computing system with multiple SCs and multiple MEC servers. Technically, an improved differential evolution algorithm is designed to solve the joint optimization problem of offloading decisions and resource allocation. Furthermore, an ARIMA-based machine learning algorithm is used to predict idle computing resources to ensure MEC server load balance and improve utilization efficiency. The experimental results demonstrate that the CAVTO optimization scheme generates a near-optimal resource allocation strategy compared with the baseline algorithms and reduces the system delay significantly.

References

  1. J. Cheng, J. Cheng, M. Zhou, F. Liu, S. Gao, and C. Liu, "Routing in Internet of Vehicles: A Review," IEEE Transactions on Intelligent Transportation Systems, vol. 16, no. 5, pp. 2339-2352, Oct. 2015. https://doi.org/10.1109/TITS.2015.2423667
  2. G. Araniti, C. Campolo, M. Condoluci, A. Iera, and A. Molinaro, "LTE for vehicular networking: a survey," IEEE Communications Magazine, vol. 51, no. 5, pp. 148-157, May 2013. https://doi.org/10.1109/MCOM.2013.6515060
  3. N. Abbas, Y. Zhang, A. Taherkordi, and T. Skeie, "Mobile Edge Computing: A Survey," IEEE Internet of Things Journal, vol. 5, no. 1, pp. 450-465, Feb. 2018. https://doi.org/10.1109/jiot.2017.2750180
  4. H. Zhang, G. Chen, and X. Li, "Resource management in cloud computing with optimal pricing policies," Computer Systems Science and Engineering, vol. 34, no. 4, pp. 249-254, 2019.
  5. T. X. Tran, A. Hajisami, P. Pandey, and D. Pompili, "Collaborative Mobile Edge Computing in 5G Networks: New Paradigms, Scenarios, and Challenges," IEEE Communications Magazine, vol. 55, no. 4, pp. 54-61, Apr. 2017. https://doi.org/10.1109/MCOM.2017.1600863
  6. W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, "Edge Computing: Vision and Challenges," IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637-646, Oct. 2016. https://doi.org/10.1109/JIOT.2016.2579198
  7. Y. Mao, C. You, J. Zhang, K. Huang, and K. B. Letaief, "A Survey on Mobile Edge Computing: The Communication Perspective," IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2322-2358, 2017. https://doi.org/10.1109/COMST.2017.2745201
  8. J. Zhang and K. B. Letaief, "Mobile Edge Intelligence and Computing for the Internet of Vehicles," Proceedings of the IEEE, vol. 108, no. 2, pp. 246-261, Feb. 2020. https://doi.org/10.1109/jproc.2019.2947490
  9. W. Tong, A. Hussain, W. X. Bo, and S. Maharjan, "Artificial Intelligence for Vehicle-to-Everything: A Survey," IEEE Access, vol. 7, pp. 10823-10843, Jan. 2019. https://doi.org/10.1109/access.2019.2891073
  10. H. Gao, W. Huang, and X. Yang, "Applying Probabilistic Model Checking to Path Planning in an Intelligent Transportation System Using Mobility Trajectories and Their Statistical Data," Intelligent Automation and Soft Computing, vol. 25, no. 3, pp. 547-559, 2019.
  11. J. Liu, X. Kang, C. Dong, and F. Zhang, "Simulation of Real-Time Path Planning for Large-Scale Transportation Network Using Parallel Computation," Intelligent Automation and Soft Computing, vol. 25, no. 1, pp. 65-77, 2019.
  12. W. Zhan, C. Luo, J. Wang, C. Wang, G. Min, H. Duan, and Q. Zhu, "Deep-Reinforcement-Learning-Based Offloading Scheduling for Vehicular Edge Computing," IEEE Internet of Things Journal, vol. 7, no. 6, pp. 5449-5465, June 2020. https://doi.org/10.1109/jiot.2020.2978830
  13. X. Cao, H. Yu, and H. Sun, "Dynamic Task Assignment for Multi-AUV Cooperative Hunting," Intelligent Automation and Soft Computing, vol. 25, no. 1, pp. 25-34, 2019.
  14. Y. Jang, J. Na, S. Jeong, and J. Kang, "Energy-Efficient Task Offloading for Vehicular Edge Computing: Joint Optimization of Offloading and Bit Allocation," in Proc. of the 91st Vehicular Technology Conference (VTC2020-Spring), pp. 1-5, May 2020.
  15. J. Sun, Q. Gu, T. Zheng, P. Dong, A. Valera, and Y. Qin, "Joint Optimization of Computation Offloading and Task Scheduling in Vehicular Edge Computing Networks," IEEE Access, vol. 8, pp. 10466-10477, Jan. 2020. https://doi.org/10.1109/access.2020.2965620
  16. S. Choo, J. Kim, and S. Pack, "Optimal Task Offloading and Resource Allocation in Software-Defined Vehicular Edge Computing," in Proc. of International Conference on Information and Communication Technology Convergence (ICTC), pp. 251-256, Oct. 2018.
  17. S. Zaman, T. Maqsood, M. Ali, K. Bilal, S. Madani, and A. Khan, "A load balanced task scheduling heuristic for large-scale computing systems," Computer Systems Science and Engineering, vol. 34, no. 2, pp. 79-90, 2019.
  18. M. Okhovvat and M. Kangavari, "TSLBS: A time-sensitive and load balanced scheduling approach to wireless sensor actor networks," Computer Systems Science and Engineering, vol. 34, no. 1, pp. 13-21, 2019.
  19. Y. Sun, X. Guo, J. Song, S. Zhou, Z. Jiang, X. Liu, and Z. Niu, "Adaptive Learning-Based Task Offloading for Vehicular Edge Computing Systems," IEEE Transactions on Vehicular Technology, vol. 68, no. 4, pp. 3061-3074, Apr. 2019. https://doi.org/10.1109/tvt.2019.2895593
  20. K. Zheng, H. Meng, P. Chatzimisios, L. Lei, and X. Shen, "An SMDP-Based Resource Allocation in Vehicular Cloud Computing Systems," IEEE Transactions on Industrial Electronics, vol. 62, no. 12, pp. 7920-7928, Dec. 2015. https://doi.org/10.1109/TIE.2015.2482119
  21. C. Lin, D. Deng, and C. Yao, "Resource Allocation in Vehicular Cloud Computing Systems with Heterogeneous Vehicles and Roadside Units," IEEE Internet of Things Journal, vol. 5, no. 5, pp. 3692-3700, Oct. 2018. https://doi.org/10.1109/jiot.2017.2690961
  22. C. Zhu, J. Tao, G. Pastor, Y. Xiao, Y. Ji, Q. Zhou, Y. Li, and A. Yla-Jaaski, "Folo: Latency and Quality Optimized Task Allocation in Vehicular Fog Computing," IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4150-4161, June 2019. https://doi.org/10.1109/jiot.2018.2875520
  23. X. Huang, R. Yu, J. Liu, and L. Shu, "Parked Vehicle Edge Computing: Exploiting Opportunistic Resources for Distributed Mobile Applications," IEEE Access, vol. 6, pp. 66649-66663, Nov. 2018. https://doi.org/10.1109/access.2018.2879578
  24. J. Sun, Q. Gu, T. Zheng, P. Dong, A. Valera, and Y. Qin, "Joint Optimization of Computation Offloading and Task Scheduling in Vehicular Edge Computing Networks," IEEE Access, vol. 8, pp. 10466-10477, Nov. 2020. https://doi.org/10.1109/access.2020.2965620
  25. K. Zhang, Y. Mao, S. Leng, Y. He, and Y. Zhang, "Mobile-Edge Computing for Vehicular Networks: A Promising Network Paradigm with Predictive Off-Loading," IEEE Vehicular Technology Magazine, vol. 12, no. 2, pp. 36-44, June 2017. https://doi.org/10.1109/MVT.2017.2668838
  26. Y. Dai, D. Xu, S. Maharjan, and Y. Zhang, "Joint Load Balancing and Offloading in Vehicular Edge Computing and Networks," IEEE Internet of Things Journal, vol. 6, no. 3, pp. 4377-4387, June 2019. https://doi.org/10.1109/jiot.2018.2876298
  27. D. Tang, X. Zhang, and X. Tao, "Delay-Optimal Temporal-Spatial Computation Offloading Schemes for Vehicular Edge Computing Systems," in Proc. of IEEE Wireless Communications and Networking Conference (WCNC), pp. 1-6, Oct. 2019.
  28. Y. Guo, F. Liu, N. Xiao, and Z. Chen, "Task-Based Resource Allocation Bid in Edge Computing Micro Datacenter," Computers, Materials and Continua, vol. 61, no. 2, pp. 777-792, 2019.
  29. Y. Liu, M. J. Lee, and Y. Zheng, "Adaptive Multi-Resource Allocation for Cloudlet-Based Mobile Cloud Computing System," IEEE Transactions on Mobile Computing, vol. 15, no. 10, pp. 2398-2410, Oct. 2016. https://doi.org/10.1109/TMC.2015.2504091
  30. J. Klaimi, S. Senouci, and M. Messous, "Theoretical Game Approach for Mobile Users Resource Management in a Vehicular Fog Computing Environment," in Proc. of the 14th International Wireless Communications & Mobile Computing Conference (IWCMC), pp. 452-457, Aug. 2018.
  31. J. Feng, Z. Liu, C. Wu, and Y. Ji, "AVE: Autonomous Vehicular Edge Computing Framework with ACO-Based Scheduling," IEEE Transactions on Vehicular Technology, vol. 66, no. 12, pp. 10660-10675, Dec. 2017. https://doi.org/10.1109/tvt.2017.2714704
  32. C. Tham and R. Chattopadhyay, "A load balancing scheme for sensing and analytics on a mobile edge computing network," in Proc. of IEEE 18th International Symposium on A World of Wireless, Mobile and Multimedia Networks (WoWMoM), pp. 1-9, July 2017.
  33. Y. Wei, Z. Wang, D. Guo, and F. R. Yu, "Deep Q-Learning Based Computation Offloading Strategy for Mobile Edge Computing," Computers, Materials and Continua, vol. 59, no. 1, pp. 89-104, 2019.
  34. Y. Li, X. Wang, W. Fang, F. Xue, H. Jin, Y. Zhang, and X. Li, "A Distributed ADMM Approach for Collaborative Regression Learning in Edge Computing," Computers, Materials and Continua, vol. 59, no. 2, pp. 493-508, 2019.
  35. L. N. Ribeiro, S. Schwarz, M. Rupp, and A. L. F. de Almeida, "Energy Efficiency of mmWave Massive MIMO Precoding with Low-Resolution DACs," IEEE Journal of Selected Topics in Signal Processing, vol. 12, no. 2, pp. 298-312, May 2018. https://doi.org/10.1109/jstsp.2018.2824762
  36. S. Schwarz, M. Rupp, and S. Wesemann, "Grassmannian Product Codebooks for Limited Feedback Massive MIMO With Two-Tier Precoding," IEEE Journal of Selected Topics in Signal Processing, vol. 13, no. 5, pp. 1119-1135, Sep. 2019. https://doi.org/10.1109/jstsp.2019.2930890
  37. R. Amin, M. Reisslein, and N. Shah, "Hybrid SDN Networks: A Survey of Existing Approaches," IEEE Communications Surveys & Tutorials, vol. 20, no. 4, pp. 3259-3306, May 2018.
  38. I. Farris, T. Taleb, Y. Khettab, and J. Song, "A Survey on Emerging SDN and NFV Security Mechanisms for IoT Systems," IEEE Communications Surveys & Tutorials, vol. 21, no. 1, pp. 812-837, Aug. 2019.
  39. T. Ma, S. Pang, W. Zhang, and S. Hao, "Virtual Machine Based on Genetic Algorithm Used in Time and Power Oriented Cloud Computing Task Scheduling," Intelligent Automation and Soft Computing, vol. 25, no. 3, pp. 605-613, 2019.
  40. P. Shu and Q. Du, "Group Behavior-Based Collaborative Caching for Mobile Edge Computing," in Proc. of IEEE 4th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Chongqing, China, pp. 2441-2447, May 2020.
  41. Korea Expressway Corporation. [Online]. Available: http://data.ex.co.kr/portal/traffic/trafficVds#