
Deep Learning Based Security Model for Cloud based Task Scheduling

  • Devi, Karuppiah (Department of CSE, SRM Valliammai Engineering College) ;
  • Paulraj, D. (Department of CSE, RMD Engineering College) ;
  • Muthusenthil, Balasubramanian (Department of CSE, SRM Valliammai Engineering College)
  • Received : 2020.04.30
  • Revised : 2020.07.30
  • Published : 2020.09.30

Abstract

Scheduling plays a dynamic role in cloud computing, both in generating and in efficiently distributing the resources of each task. The principal goal of scheduling is to limit resource starvation and to guarantee fairness among the parties using the resources. Because the demand for resources fluctuates dynamically, prearranging resources is a challenging task. Many task-scheduling approaches have been used in the cloud-computing environment, and security is one of the core issues in distributed computing. We have designed a deep learning-based security model for scheduling tasks in cloud computing and implemented it using the CloudSim 3.0 simulator written in Java. The results are verified from different perspectives, such as response time with and without security factors, makespan, cost, CPU utilization, I/O utilization, memory utilization, and execution time, and are compared with the Round Robin (RR) and Weighted Round Robin (WRR) algorithms.


1. Introduction

Scheduling makes a significant contribution in cloud computing by quickly and easily assigning resources to each task. Task scheduling in the cloud computing environment is used to select appropriate resources for the execution of tasks by considering constraints and parameters. The scheduler, the user, and the virtual machine (VM) clusters are the essential components needed to schedule work in the cloud. A user hands over tasks to the scheduler in the cloud environment. The scheduler organizes the tasks as per their requirements and delivers them to VMs, and finally the user receives the output from the scheduler. Scheduling can be classified according to the execution time as static or dynamic planning.

Scheduling in cloud computing can be divided into three phases:

• Resource identification and filtering – The data center broker discovers and gathers status information about the resources available in the network environment.

• Selection of resources – Resources are chosen according to the task and resource parameters.

• Task submission – The tasks are submitted to the chosen resources.

The principal goal of scheduling is to reduce competition for resources and to ensure that the parties use the resources fairly. Scheduling settles the question of which tasks the most essential resources will be allocated to.

The demand for resources fluctuates dynamically, so scheduling resources is a difficult task. A task scheduler in cloud computing has to satisfy cloud customers with the agreed quality of service (QoS) while improving the income of cloud providers. It is a major difficulty to dispatch the users' tasks effectively and fairly to specific resources while following the QoS requirements of each cloud computing center and its users. In critical applications, many scheduling strategies are used by the master nodes to distribute their tasks efficiently. As the number of cloud users increases, scheduling becomes very difficult, and a suitable scheduling algorithm is required. Appropriate scheduling algorithms are needed to reliably handle the various problems and limitations associated with different scheduling techniques.

Various forms of task scheduling have been used in the cloud setting, including QoS-based scheduling, cost-based scheduling, cluster-based scheduling, priority scheduling, fuzzy-based scheduling, ant colony-based scheduling, particle swarm optimization (PSO)-based job scheduling, genetic algorithm-based scheduling, and multiprocessor-based scheduling [1]. Among non-preemptive methods, the two most important scheduling strategies are Round Robin and Weighted Round Robin.

The Round Robin algorithm assigns each incoming task to the next VM in the queue regardless of the load on that VM; it does not take into account the resources, priorities, or lengths of tasks, so higher-priority and longer activities end up with higher response times. Weighted Round Robin considers the VMs' resource capabilities and assigns more tasks to larger-capacity VMs based on the weight given to each VM, but it fails to consider the duration of the tasks when selecting the appropriate VM. These two algorithms are implemented for comparative analysis.
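The contrast between the two baselines can be sketched as follows. This is a minimal illustration, not the CloudSim implementation; the task and VM identifiers are hypothetical.

```python
from itertools import cycle

def round_robin(tasks, vms):
    """Assign each task to the next VM in cyclic order, ignoring VM load."""
    assignment = {}
    vm_cycle = cycle(vms)
    for task in tasks:
        assignment[task] = next(vm_cycle)
    return assignment

def weighted_round_robin(tasks, vm_weights):
    """Assign proportionally more tasks to higher-weight (larger) VMs.
    vm_weights: dict mapping VM id -> integer capacity weight."""
    # Expand each VM id by its weight, then cycle through the expanded list.
    expanded = [vm for vm, w in vm_weights.items() for _ in range(w)]
    assignment = {}
    vm_cycle = cycle(expanded)
    for task in tasks:
        assignment[task] = next(vm_cycle)
    return assignment
```

Note that neither function inspects task length or current load, which is exactly the weakness the proposed model addresses.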

Cloud computing security is one of the main cloud problems. Security-aware scheduling is essential to run the tasks according to the customers' cloud security specifications.

The main contributions are as follows:

1. Designed a security model to schedule tasks in cloud computing.

2. Proposed deep reinforcement learning for task scheduling in cloud computing.

3. Implemented the model using the CloudSim 3.0 simulator written in Java and verified the results.

4. Performed an in-depth performance analysis from different perspectives, such as response time with and without security factors, makespan, communication cost, CPU utilization, I/O utilization, storage overhead, and execution time.

The road map of the paper is as follows. Section 2 reviews the related work. Section 3 presents the system model, which works in the following stages.

Stage 1: Various tasks are collected and allocated to the queuing system by considering their execution time and deadline requirements.

Stage 2: Security levels are classified based on the task load, the number of resources required, and the security requirements, so that every task can be assigned to a server of the respective security level.

Stage 3: Tasks are assigned from the security-level servers to VMs while ensuring the proper security level. The output of the system is an n x n matrix of total cost.

Section 4 covers the model implementation and performance evaluation, demonstrating the effectiveness of our model. Finally, we present concluding remarks.

2. Related Work

Guo et al. [2] developed task scheduling for a cloud management system and formulated a task-scheduling model that minimizes cost and resolves the problem through PSO. They examined and measured this method on the basis of particle swarm convergence, mutation, and a local search algorithm. Experimental tests show that the PSO algorithm finds the optimal solution and converges faster than other approaches on major tasks, but they did not consider energy efficiency and service availability.

Gomathi et al. [3] proposed a hybrid PSO-based (HPSO) task scheduling algorithm, which improves PSO, decreases average running time, improves the usage of resources, and provides users with sufficient resources. The experimental results show that HPSO-based task scheduling can attain better load balancing than PSO-based scheduling; however, this approach does not contribute to broad-based optimization.

Alkayal et al. [4] designed task scheduling based on a new ranking methodology using the PSO algorithm. It tests three objective functions involving ECT, TEC, and VM processing. In MOPSO task scheduling, the results of the ranking techniques are used to determine the best virtual machine for each job. The tasks in this technique were assigned to the VMs to reduce waiting times and boost system performance.

Wu et al. [5] proposed an algorithm for QoS task scheduling. In this algorithm, the user's right, the task length, the expectation, and the pending waiting time are combined to determine the goals and prioritize the tasks under consideration. The experimental results indicate that the algorithm achieves good efficiency and load balance through QoS-driven preference and execution.

Ali et al. [6] suggested a grouped task scheduling algorithm for cloud activities. This algorithm incorporates various task attributes, such as user category, task importance, task size, and task latency, to calculate the task priority. The experimental results indicate that the GTS algorithm provides minimum run time for all tasks and minimum latency for various tasks compared with both the Min-Min and TS algorithms.

Agarwal et al. [7] suggested a priority scheduling framework for the cloud. This model is designed to reduce the execution time of tasks. VMs are prioritized according to their millions of instructions per second. Activities with the lowest priority are scheduled on VMs with the highest priority. The results are compared with the first fit (FF) and Round Robin (RR) algorithms, showing higher performance than FF and RR.

Mehranzadeh et al. [8] proposed fuzzy logic for task scheduling. This scheduling method is capable of scheduling data center VMs. Comparison with the two scheduling techniques FCFS and RR shows the effectiveness of the algorithm. However, this algorithm is affected by external priorities when several jobs are scheduled, and the creation of rules is a very difficult task for fuzzy logic, as it increases the calculation time.

Zhang et al. [9] developed a task scheduling algorithm in cloud computing based on fuzzy clustering with a parallel approach. Their research focuses on parallel scheduling, particularly in the oil and seismic scanning industries, with particular emphasis on high-performance computing, which is necessary for huge data processing. The main disadvantage of this clustering method is that the cluster descriptor has no interpretability.

Niazmand et al. [10] provided an enhanced ant colony optimization algorithm (JSWA) to schedule grid computing tasks. The JSWA algorithm measures parameters like latency, requests, reliability, costs, and recognition time. To reduce overall execution costs, Pandey et al. [11] developed a scheduling approach using PSO. They compared the PSO and BRS algorithms, indicating that PSO saves three times as much cost as BRS. However, data transfers from one compute node to the next take increasingly more time to transmit and store.

Feng et al. [12] proposed a task scheduling methodology using the PSO algorithm and Pareto dominance theory. This theory determines optimized schedules for the multi-objective optimization of resources based on the registration of the resource, total execution time, and QoS for each task. This method only works for basic tasks and lacks a convergence principle for problem solving.

Juan et al. [13] suggested a cloud-based work scheduling technique using an improved PSO-based algorithm. They developed a cost-vector model to measure preparation costs and a solution based on input tasks and QoS parameters. Even though the method has a lot of complexity, it provides an effective improvement in scheduling.

Alkayal et al. [14] used the PSO algorithm to build a new ranking technique for multi-objective task scheduling, where tasks were assigned to VMs in order to minimize waiting time and increase device throughput. Dordaie et al. [15] suggested a work scheduling algorithm in the cloud using a hybrid PSO and hill-climbing algorithm to solve the difficulties. This approach was well designed, but it requires more time to accomplish the task. Likewise, Verma and Kaushal [16] clarified the multi-objective hybrid PSO algorithm for scientific workflow scheduling.

Gao et al. [17] introduced a multi-objective function in job-shop scheduling; in order to reduce execution time and preparation costs, they used a versatile job scheduling method. Keshanchi et al. [18] developed an enhanced genetic algorithm with priority queues to plan tasks in the cloud environment. A rare-selection elitism technique was used to avoid early convergence, and statistical analyses on randomly generated graphs were performed.

Shishido et al. [19] suggested a technique for scheduling using genetic algorithms, measuring the scheduling efficiency using a security algorithm and cost-conscious workflow programming. Su et al. [20] described a cost-efficiency-based task scheduling methodology for the execution of large cloud programs. They put forward two heuristic strategies for scheduling tasks in the cloud environment. The first, based on the concept of Pareto dominance during runtime, maps tasks to the most economical VMs. The second reduces the monetary cost of non-critical activities.

Karatza [21] proposed a methodology for gang scheduling based on clustering systems. Gang scheduling is a process that deals with the planning strategy of parallel and space-sharing systems. A migration strategy is used to reduce the disruption in the schedule caused by the planned employment of gangs. Two homogeneous clusters were simulated to determine the performance of specific workloads, and the effect of migration on the service time of parallel tasks was addressed. When this algorithm is used for task planning, the merging or splitting decision is difficult to correct.

3. Deep reinforcement learning-based security


Fig. 1 shows the security model for task scheduling in cloud computing. This model is composed of four sub-models:


Fig. 1. Security model for task scheduling in cloud computing

1. User workload model

2. Security classifier model

3. Deep reinforcement learning-based task scheduling model

4. Price model with security

3.1 User workload model

In the cloud environment, there are n tasks to be processed and x VMs available. Each VM is associated with two parameters, VMCPU and VMMEM. Each task is associated with six parameters: TCPU, TMEM, TET, TDT, TIC, and TS. Table 1 gives the notations and descriptions used in the model.

Table 1. Notations and their descriptions


Algorithm

Input: Set of tasks T = (t1, t2, …, tn) with deadline times TDT = (t1DT, t2DT, …, tnDT) and instruction counts TIC = (t1IC, t2IC, …, tnIC)

Output: Task loaded into the queue or rejected

1: Initialize CPI = constant value and clock cycle time CT = constant value

2: for i = 1 to n do

3: Compute execution time ET = CPI * tiIC * CT

4: if ET + Tstart ≤ tiDT then

5: Place the task inside the task queue

6: else

7: Reject the task

8: end
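The admission check above can be sketched in Python as follows. This is a minimal illustration, assuming tasks are given as (instruction count, deadline) pairs and a common start time Tstart; the function and field names are hypothetical.

```python
def admit_tasks(tasks, cpi, clock_time, t_start=0.0):
    """Deadline admission check (Section 3.1): a task enters the queue only
    if its estimated finish time does not exceed its deadline.
    tasks: list of (instruction_count, deadline) pairs (illustrative layout)."""
    queue, rejected = [], []
    for ic, deadline in tasks:
        et = cpi * ic * clock_time          # ET = CPI * tIC * CT
        if t_start + et <= deadline:        # task can finish before its deadline
            queue.append((ic, deadline))
        else:
            rejected.append((ic, deadline))
    return queue, rejected
```

A task whose estimated finish time falls past its deadline is rejected up front rather than wasting VM capacity.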

3.2 Security classification model

The task submitted by the user contains its CPU usage, task size, and memory, which identify the resource demand and security level of the task. Tasks are classified based on their usage of CPU, memory, and I/O into three categories: CPU-intensive, memory-intensive, and I/O-intensive. The capacities of the system are denoted SCPU, SI/O, and SMEM. The task category (TC) is then determined by the ratios of CPU, I/O, and memory usage for each task. For each task Ti, the ratios (R = T/S) RCPU, RI/O, and RMEM are calculated from its parameters TCPU, TI/O, and TMEM and the capacities SCPU, SI/O, and SMEM. The largest of these three ratios determines the task category.

TC= max (RCPU, RI/O, RMEM)       (1)

Finally, all tasks are divided into three queues CPUTC, I/OTC, and MEMTC of CPU intensive, I/O intensive, and memory-intensive by the task category TC.

In the cloud data center, the CSP groups the VMs into three levels based on the usage of CPU, memory, and I/O to provide security to the tasks. For instance, tasks that require less security may be loaded into level 1; if a task requires medium-level security, it is loaded into level 2; and highly secure tasks are loaded into level 3.

Level 1 contains only CPU-intensive tasks. The operational mode of the encryption algorithm is offline, and the RSA algorithm with a key size of 1024 bits is used for both encryption and decryption.

Level 2 contains I/O-intensive tasks. The operational modes are both offline and online, and an advanced cryptography protocol is used to provide stronger security. The RSA algorithm with a key size of 2048 bits is used for encryption and decryption.

Level 3 contains memory-intensive tasks. The operational mode of the algorithm is offline only. An RSA algorithm with a key size of 2048 bits and an elliptic curve signature algorithm with a key size of 164 bits are used for encryption and decryption.

CPUTC = {U1, U2,...,..., Ui}

I/OTC = {Ui+1, Ui+2,...,..., Uj}

MEMTC = {Ui+j+1, Ui+j+2,......, Un-i-j}

Here, the resources are sorted rather than classified, owing to their number and dynamism.

Proposed algorithm for task classification.

Input: Set of tasks (T1, T2, …, Tn) with TCPU, TI/O, and TMEM and SCPU, SI/O, and SMEM

Output: Task queues CPUTC, I/OTC, MEMTC

1: for i = 1 to n do

2: Calculate the task ratios of CPU, I/O, and memory:

3: RCPU = TCPU / SCPU

4: RI/O = TI/O / SI/O

5: RMEM = TMEM / SMEM

6: if max(RCPU, RI/O, RMEM) == RCPU then

7: TC → CPUTC

8: else if max(RCPU, RI/O, RMEM) == RI/O then

9: TC → I/OTC

10: else

11: TC → MEMTC

12: end

13: end

14: Sort all three queues in ascending order
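The classification algorithm above can be sketched as follows; the tuple layout for tasks and the queue names are assumptions for illustration.

```python
def classify_tasks(tasks, s_cpu, s_io, s_mem):
    """Classify tasks (Eq. 1) into CPU-, I/O-, and memory-intensive queues by
    the largest demand-to-capacity ratio. Each task is a (t_cpu, t_io, t_mem)
    tuple; the field layout is illustrative."""
    cpu_q, io_q, mem_q = [], [], []
    for task in tasks:
        t_cpu, t_io, t_mem = task
        ratios = {"cpu": t_cpu / s_cpu, "io": t_io / s_io, "mem": t_mem / s_mem}
        category = max(ratios, key=ratios.get)   # TC = max(R_CPU, R_I/O, R_MEM)
        {"cpu": cpu_q, "io": io_q, "mem": mem_q}[category].append(task)
    # Final step: sort all three queues in ascending order
    return sorted(cpu_q), sorted(io_q), sorted(mem_q)
```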

Based on the task type, the tasks are classified into three security levels, as shown in Fig. 2 and Fig. 3.


Fig. 2. Security level classifier


Fig. 3. Task allocation

3.3 Deep Mapping Algorithm for VM allocation

The Deep Mapping reinforcement learning algorithm is used to map tasks to the corresponding virtual machines. The training algorithm for the deep Q-network is augmented with experience replay and a target network, so that a large neural network with a high convergence speed can be trained.

Experience replay:

The inner loop of the algorithm stores the random task tr in memory ∆ and applies the Q-learning update to experiences randomly picked from the collected samples. This improves on standard Q-learning in several respects. Sample efficiency is higher, since each stored step is replayed in many weight updates. Learning from randomly selected experience is more efficient than learning from sequential experience, and the random selection makes the procedure stable.

Target network:

A separate neural network model is used in deep Q-learning to generate target VM IDs. The target VM ID is structured with different parameters; the parameters of the target VMs are the CPU, I/O, and memory response times. Every γ steps, the parameters of the target network are copied from the evaluation network. This eliminates the divergence that arises when the network that produces the targets moves with every update, as it does in standard Q-learning.

Deep mapping into VMs:

This is an exploration mechanism aimed at discovering new VMs whose response times are not yet known. The method works in an ε-greedy manner: with a small probability it selects a random action to explore; otherwise it chooses the VM with the best response time, and the exploration factor is reduced toward a minimum value in the next cycle. This approach yields VM IDs with good response times.

Deep Mapping Algorithm

1: Initialize historical memory dataset ∆ with capacity Ω

2: Initialize the deep Q-network Q with parameters δ and the test deadline

3: for episode = 1 to E do

4: Set the initial cloud environment

5: Start sequence s1 = {x1}

6: for i = 1 to T do

7: With probability ε, select a random task ti

8: Otherwise, choose ti = arg max Q(si, t, δ)

9: Execute ti and observe the next state xi+1

10: if reject == 1 then

11: Run the DQN again to get a new task ti'

12: if ti' ≠ ti then

13: Replace ti with ti'

14: end

15: end

16: Set si+1 = (si, ti, xi+1)

17: Store the transition (si, ti, VMi, si+1) in ∆

18: targetj = VMj, if the episode terminates at step j+1

19: targetj = VMj + ζ max Q(sj+1, t', δ'), otherwise

20: Train the deep network ξ at every step

21: Every γ steps, copy Q to Q'

22: end

23: end

24: return the assigned VM IDs for all tasks, Ts
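The two mechanisms used by the Deep Mapping algorithm, ε-greedy selection (lines 7-8) and experience replay (line 17), can be illustrated with a minimal sketch. This is not the paper's neural network: the tabular q_values dictionary and the class and function names are illustrative stand-ins.

```python
import random
from collections import deque

def pick_vm(q_values, vm_ids, epsilon, rng):
    """Epsilon-greedy VM selection: with probability epsilon explore a random
    VM; otherwise exploit the VM with the highest estimated value.
    q_values: dict mapping vm_id -> estimated value (stand-in for Q(s, t))."""
    if rng.random() < epsilon:
        return rng.choice(vm_ids)                    # explore
    return max(vm_ids, key=lambda v: q_values[v])    # exploit

class ReplayMemory:
    """Fixed-capacity experience store (the memory Delta of the algorithm).
    Uniform random sampling breaks the correlation of sequential experience."""
    def __init__(self, capacity):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted

    def store(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size, rng):
        return rng.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```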

3.4 Deep Reinforcement learning Price model

In this section, we describe the price-estimation model for IaaS in public/private clouds. Each virtual machine in the cloud can be either active or inactive, so the price is computed for both active and inactive virtual machines.

\(C^a_{vm}\) – cost of an active virtual machine

\(C^{ia}_{vm}\) – cost of an inactive virtual machine

\(C^a_{vm}\) is the sum of the networking, storage, and computational costs.

\(C^a_{vm} = C^a_{nw} + C^a_{st} + C^a_{Comp}\)       (2)

\(C^a_{nw}\) is calculated from the price of the virtual machine and its utilization time.

\(C^a_{nw} = P^T_{vm} * usage \ time \ t\)       (3)

\(C^a_{st}\) is determined by the virtual machine usage time and the storage function, which involves the total number of I/O activities, the CPU utilization, and the memory usage for a given time.

\(C^a_{st} = P^T_{vol} * t +P^T_{io}*t + P^T_{CPU} * t\)       (4)

\(C^a_{Comp}\) is determined by NW, the number of workloads on the virtual machine; CW, the complexity of the workload; SR, the security requirements; and MR, the monitoring requirements.

\(P^a_{comp} = ((NW*CW) *SR) + ((NW*CW)*MR)\)       (5)

\(C^a_{comp} = P^a_{comp}*t\)       (6)

Monitoring requirements are set in the range 1–5, based on the shift: morning shift 1, night shift 2, evening shift 3, general shift 4, and 24/7 shift 5.

Workload complexity values are set on a scale of 1–2: a 200 MB task has scale value 1 and a 1000 MB task has scale value 2.

The total cost is determined by the following formula:

\(C=C^a_{vm} +C^{ia}_{vm}\)       (7)
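Equations (2)–(7) can be collected into a small sketch. The parameter names mirror the price symbols in the text (\(P^T_{vm}\), \(P^T_{vol}\), and so on), but all numeric values used below are illustrative, not taken from the experiments.

```python
def active_vm_cost(p_vm, p_vol, p_io, p_cpu, nw, cw, sr, mr, t):
    """Total active-VM cost per Eqs. (2)-(6) for usage time t."""
    c_nw = p_vm * t                               # Eq. (3): networking cost
    c_st = p_vol * t + p_io * t + p_cpu * t       # Eq. (4): storage cost
    p_comp = (nw * cw) * sr + (nw * cw) * mr      # Eq. (5): computation price
    c_comp = p_comp * t                           # Eq. (6): computation cost
    return c_nw + c_st + c_comp                   # Eq. (2): sum of the three

def total_cost(active_cost, inactive_cost):
    """Eq. (7): total cost over active and inactive virtual machines."""
    return active_cost + inactive_cost
```

For example, with unit prices of 1, two workloads of complexity 1, security requirement 3, monitoring requirement 2, and t = 2, the active cost is 2 + 6 + 20 = 28.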

4. Experimental results and discussion

4.1 Experimental setup

In this work, we adopted the CloudSim 3.0 simulator written in Java. The cloud simulator is used to create cloud data centers (DCs), VMs, and computational resources, and to manage cloud operations such as VM scheduling and provisioning. Table 2 gives the hardware requirements and Table 3 the software requirements of the proposed work.

Table 2. Hardware requirements


Table 3. Software requirements


4.2 Performance metrics

The main objective of our proposed model is to obtain efficiency in terms of execution time, make-span, task scheduling and load balancing along with the security factors.

• Efficiency is reflected through the execution time of the tasks in the model.

• Scheduling efficiency is reflected through deadline verification and response time.

• Load-balancing efficiency is achieved through the utilization of the CPU, memory, and I/O resources.

• Task-scheduling efficiency is achieved through the deep learning model.

• Security factors are implemented through the security-level classification discussed earlier.

Response time:

The basic experiment measures response time; the arriving task sizes ranged from 1500 to 3000 KB. The results are shown in Fig. 4, which gives the response time for all task sizes with and without security. A comparison of execution times with and without security factors is tabulated. Although the security model has a slightly higher response time than the normal response time, it has a greater positive impact on overall system performance.


Fig. 4. Response time of all the tasks size with security and without security.

Response time = the time interval between the task's arrival and its completion

TRe = Tc- Ta       (8)

TRe – task response time

Tc – task completion time

Ta – task arrival time

Tc=(TDLT + Ttransfer + TExe + TSecurity)       (9)

TDLT - Deep learning algorithm execution time of the task

Ttransfer – Time taken to transfer the task

TExe – Actual execution of the task

TSecurity –Time to run the security algorithms.

The execution time of a task is measured using Eq. (9). The results are compared with the RR and WRR algorithms; the comparative analysis is given in Fig. 5.


Fig. 5. Execution time Comparison

Makespan:

Makespan is used to determine the optimal completion period by comparing the last task's completion time when all tasks are scheduled. Here, Tij denotes the time that VM j needs to complete task i.

Makespan= max{Tij for i tasks mapped to j VM}       (10)

In Fig. 6, the makespan is compared for Round Robin (RR), Weighted Round Robin (WRR), and the proposed approach. Our proposed approach yields a lower makespan than the other two methods.


Fig. 6. Makespan Comparison.
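Eq. (10) reduces to taking the maximum completion time over the task-to-VM mapping; a minimal sketch, with an illustrative dictionary layout for Tij:

```python
def makespan(completion_times):
    """Eq. (10): makespan is the latest completion time over all mapped tasks.
    completion_times: dict mapping (task_i, vm_j) -> completion time T_ij."""
    return max(completion_times.values())
```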

Load balancing:

The load balancer inspects the VMs that have completed all of their assigned tasks, then chooses the most heavily loaded VM from the list and calculates the completion times of the tasks on that VM along with the total load on the loaded and idle VMs. If the least-loaded VM can complete any of the jobs present on the heavily loaded VM in a shorter time, that job is migrated to the least-loaded VM. This load-balancing factor is measured through the utilization of CPU, memory, and I/O resources in the private cloud. The results are shown in Table 4 and Fig. 7.

Table 4. Resource Utilization


Fig. 7. Utilization of CPU, memory and I/O resources comparison
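The migration rule of the load balancer described above can be sketched as follows; the data layout (per-VM load totals and per-task completion times) is an assumption for illustration.

```python
def rebalance(vm_loads, task_times):
    """One migration step: move a task from the most-loaded VM to the
    least-loaded VM when that shortens its completion.
    vm_loads: dict vm -> total remaining load time
    task_times: dict vm -> list of per-task completion times on that VM."""
    heavy = max(vm_loads, key=vm_loads.get)
    light = min(vm_loads, key=vm_loads.get)
    if heavy == light or not task_times[heavy]:
        return None                                   # nothing to migrate
    task = min(task_times[heavy])                     # cheapest task to move
    if vm_loads[light] + task < vm_loads[heavy]:      # migration helps
        task_times[heavy].remove(task)
        task_times[light].append(task)
        vm_loads[heavy] -= task
        vm_loads[light] += task
        return (heavy, light, task)
    return None
```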

Cost:

The cost of task execution is based on the number of virtual machines used for execution. The cost computation is performed using Eq. (7) from the previous section. Fig. 8 shows the comparative analysis of the cost involved in task execution. The proposed method incurs less cost than RR and WRR.


Fig. 8. Cost comparison

5. Conclusion

This paper presented a new approach to designing a deep learning-based security model for task scheduling in cloud computing. The model was implemented using the CloudSim 3.0 simulator written in Java, and the results were verified from different perspectives, such as response time with and without security factors, makespan, cost, CPU utilization, I/O utilization, memory utilization, and execution time, in comparison with the RR and WRR algorithms. The experiments show that our model outperforms these ad hoc heuristics.

References

  1. A. R. Arunarani, D. Manjula, and V. Sugumaran, "Task scheduling techniques in cloud computing: A literature survey," Futur. Gener. Comput. Syst., vol. 91, pp. 407-415, 2019. https://doi.org/10.1016/j.future.2018.09.014
  2. L. Guo, S. Zhao, S. Shen, and C. Jiang, "Task scheduling optimization in cloud computing based on heuristic Algorithm," J. Networks, vol. 7, no. 3, pp. 547-553, 2012.
  3. B. Gomathi and K. Krishnasamy, "Task scheduling algorithm based on Hybrid Particle Swarm Optimization in the cloud computing environment," J. Theor. Appl. Inf. Technol., vol. 55, no. 1, pp. 33-38, 2013.
  4. E. S. Alkayal and N. R. Jennings, "Efficient Task Scheduling Multi-Objective Particle Swarm Optimization in Cloud Computing," in Proc. of 41st Conf. Local Comput. Networks Workshops, pp. 17-24, 2016.
  5. X. Wu, M. Deng, R. Zhang, B. Zeng, and S. Zhou, "A task scheduling algorithm based on QoS-driven in Cloud Computing," Procedia Comput. Sci., vol. 17, pp. 1162-1169, 2013. https://doi.org/10.1016/j.procs.2013.05.148
  6. H. Gamal El-Din Hassan Ali, I. A. Saroit, and A. M. Kotb, "Grouped tasks scheduling algorithm based on QoS in a cloud computing network," Egypt. Informatics J., vol. 18, no. 1, pp. 11-19, 2017. https://doi.org/10.1016/j.eij.2016.07.002
  7. D. A. Agarwal and S. Jain, "Efficient Optimal Algorithm of Task Scheduling in Cloud Computing Environment," Int. J. Comput. Trends Technol., vol. 9, no. 7, pp. 344-349, 2014. https://doi.org/10.14445/22312803/IJCTT-V9P163
  8. A. Mehranzadeh and S. Mohsen Hashemi, "A Novel-Scheduling Algorithm for Cloud Computing based on Fuzzy Logic," Int. J. Appl. Inf. Syst., vol. 5, no. 7, pp. 28-31, 2013. https://doi.org/10.5120/ijais13-450939
  9. Q. Zhang, H. Liang, and Y. Xing, "A Parallel Task Scheduling Algorithm Based on Fuzzy Clustering in Cloud Computing Environment," Int. J. Mach. Learn. Comput., vol. 4, no. 5, pp. 437-444, 2014. https://doi.org/10.7763/IJMLC.2014.V4.451
  10. E. Niazmand, J. Bayrampoor, A. G. Delavar, and A. R. K. Boroujeni, "Jswa An Improved Algorithm For Grid Workflow Scheduling Using Ant Colony Optimization," J. Math. Comput. Sci., vol. 6, no. 4, pp. 315-331, 2013. https://doi.org/10.22436/jmcs.06.04.08
  11. S. Pandey, L. Wu, S. M. Guru, and R. Buyya, "A particle swarm optimization-based heuristic for scheduling workflow applications in cloud computing environments," in Proc. of Int. Conf. Adv. Inf. Netw. Appl. (AINA), pp. 400-407, 2010.
  12. M. Feng, X. Wang, Y. Zhang, and J. Li, "Multi-objective particle swarm optimization for resource allocation in cloud computing," in Proc. of 2012 IEEE 2nd Int. Conf. Cloud Comput. Intell. Syst. IEEE CCIS 2012, vol. 3, pp. 1161-1165, 2012.
  13. J. Wang, F. Li, and A. Chen, "An improved PSO based task scheduling algorithm for a cloud storage system," Adv. Inf. Sci. Serv. Sci., vol. 4, no. 18, pp. 465-471, 2012.
  14. E. S. Alkayal, N. R. Jennings, and M. F. Abulkhair, "Efficient Task Scheduling Multi-Objective Particle Swarm Optimization in Cloud Computing," in Proc. of Conf. Local Comput. Networks, LCN, pp. 17-24, 2016.
  15. N. Dordaie and N. J. Navimipour, "A hybrid particle swarm optimization and hill climbing algorithm for task scheduling in the cloud environments," ICT Express, vol. 4, no. 4, pp. 199-202, 2018. https://doi.org/10.1016/j.icte.2017.08.001
  16. A. Verma and S. Kaushal, "A hybrid multi-objective Particle Swarm Optimization for scientific workflow scheduling," Parallel Comput., vol. 62, pp. 1-19, 2017. https://doi.org/10.1016/j.parco.2017.01.002
  17. J. Gao, M. Gen, L. Sun, and X. Zhao, "A hybrid of genetic algorithm and bottleneck shifting for multiobjective flexible job shop scheduling problems," Comput. Ind. Eng., vol. 53, no. 1, pp. 149-162, 2007. https://doi.org/10.1016/j.cie.2007.04.010
  18. B. Keshanchi, A. Souri, and N. J. Navimipour, "An improved genetic algorithm for task scheduling in the cloud environments using the priority queues : Formal verification , simulation , and statistical testing," J. Syst. Softw., vol. 124, pp. 1-21, 2017. https://doi.org/10.1016/j.jss.2016.07.006
  19. H. Y. Shishido, J. C. Estrella, C. F. M. Toledo, and M. S. Arantes, "Genetic-based algorithms applied to a workflow scheduling algorithm with security and deadline constraints in clouds," Comput. Electr. Eng., vol. 69, pp. 378-394, 2018. https://doi.org/10.1016/j.compeleceng.2017.12.004
  20. S. Su, J. Li, Q. Huang, X. Huang, K. Shuang, and J. Wang, "Cost-efficient task scheduling for executing large programs in the cloud," Parallel Comput., vol. 39, no. 4-5, pp. 177-188, 2013. https://doi.org/10.1016/j.parco.2013.03.002
  21. Z. C. Papazachos and H. D. Karatza, "The impact of task service time variability on gang scheduling performance in a two-cluster system," Simul. Model. Pract. Theory, vol. 17, no. 7, pp. 1276-1289, 2009. https://doi.org/10.1016/j.simpat.2009.05.002