
A Hybrid Estimation of Distribution Algorithm with Differential Evolution based on Self-adaptive Strategy

  • Fan, Debin (School of Information Science and Technology, Jiujiang University) ;
  • Lee, Jaewan (Department of Information and Communication Engineering, Kunsan National University)
  • Received : 2020.09.16
  • Accepted : 2021.01.14
  • Published : 2021.02.28

Abstract

Estimation of distribution algorithm (EDA) is a popular stochastic metaheuristic algorithm that has been widely applied to various optimization problems. However, it has been shown that the diversity of the population gradually decreases during the iterations, which makes EDA prone to premature convergence. This article introduces a hybrid estimation of distribution algorithm (EDA) with differential evolution (DE) based on a self-adaptive strategy, named HEDADE-SA. Firstly, an alternative probability model is used in sampling to improve population diversity. Secondly, the proposed algorithm is combined with DE, and a self-adaptive strategy is adopted to improve the convergence speed. Finally, experiments on twenty-five benchmark problems are conducted to verify the performance of HEDADE-SA. The results indicate that HEDADE-SA is a feasible and effective algorithm.

1. Introduction

Continuous optimization problems appear in most fields of science and engineering and have received considerable attention. Their objective functions are often continuous, noisy, or otherwise difficult, so they are hard to solve with conventional methods, and metaheuristic algorithms such as evolutionary algorithms (EAs) are taken into account. Classical EAs, such as the genetic algorithm (GA), ant colony optimization (ACO), particle swarm optimization (PSO), and artificial bee colony (ABC), have been used to deal with these problems[1-3].

EDA was first proposed by Mühlenbein and Paass[4] and has been successfully used in a range of academic and practical optimization applications. For example, Liu et al.[5] presented a copula-based EDA (cEDA) for a flow-shop scheduling problem. Dong et al.[6] introduced a latent space-based EDA for large-scale global optimization problems (LSGOs). Yang et al.[7] combined distribution estimation and niching to implement a multimodal EDA for multimodal problems. For constrained problems, Gao et al.[8] proposed an EDA enhanced by an extreme elitism selection technique, which shows better performance than other EDAs.

EDA generates offspring from global statistical information while ignoring the location information of individuals, which helps maintain population diversity. However, it easily suffers from premature convergence[9]. In addition, since the sample size is fixed, the calculation of the probability distribution takes considerable time[9].

Differential evolution (DE) is another well-known stochastic metaheuristic EA, first introduced by Storn and Price[10], with strong local search ability[11]. To address the above issues, hybridization of the EDA and DE algorithms has attracted increasing attention and achieved superior performance. For example, Sun et al.[12] first presented a combination of DE and EDA operators termed DE/EDA. Shao et al.[13] proposed a hybrid DE/EDA with an adaptive incremental learning strategy. Inspired by literature[12], Dong et al.[14] presented a hybrid mechanism of DE and EDA with a local search strategy, named EDA/DE-EIG. Fang et al.[15] introduced a hybridization of DE and EDA termed DE/GM, which fuses the Gaussian probabilistic model of EDA with the crossover/mutation operators of DE to generate offspring solutions.

Although the hybridization technique does improve the performance of the algorithm, it still cannot avoid the blindness and randomness of individual evolution. In the past few years, self-adaptive strategies in EAs have aroused great attention, and various strategies have been introduced. For example, Wang et al.[16] introduced a self-adaptive ensemble DE algorithm (SAEDE), in which the relevant parameters are self-adaptive and ensembled. Liu et al.[17] presented a self-adaptive bare-bones DE with a bi-mutation strategy (SMGBDE) to reduce the blindness of the mutation strategy. Wang et al.[18] introduced a method combining self-adaptive mutation DE and PSO (DEPSO). These works indicate that a self-adaptive strategy can effectively reduce the blindness and randomness of individual evolution and improve the convergence speed of the algorithm.

Inspired by the above considerations, we design a hybrid EDA with DE based on a self-adaptive strategy (HEDADE-SA) in this article. The proposed HEDADE-SA utilizes the self-adaptive strategy to adjust the proportion of the EDA and DE operators: in the early stage, HEDADE-SA relies more on the EDA operator for exploration, while in the later stage it uses the DE operator more for exploitation. Moreover, an alternative probability model for sampling is used in HEDADE-SA. Twenty-five test problems of CEC2005 are used to validate the performance of HEDADE-SA.

The remainder of this article is structured as follows: Section 2 describes EDA and DE. Section 3 is devoted to HEDADE-SA. Section 4 presents an experimental analysis of HEDADE-SA. Finally, Section 5 concludes this article.

2. Description of Algorithms

2.1 Estimation of Distribution Algorithm

EDA[4] is a stochastic metaheuristic algorithm. Compared with the classical GA, EDA has no crossover or mutation operator. It learns from global statistical information, builds a probabilistic model, samples from it, and thereby obtains promising solutions.

The basic steps of EDA are given as follows.

Step 1: Population initialization.

Step 2: Calculating and evaluating the fitness of individuals.

Step 3: Selecting m elite individuals from the population.

Step 4: Generating the probabilistic model according to the elite population.

Step 5: Sampling to create a new population by the probabilistic model from step 4.

Step 6: If the algorithm meets the termination criterion, output the global optimum; otherwise, go to Step 2.
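As an illustration, the steps above can be sketched in Python. This is a minimal sketch under stated assumptions: a univariate Gaussian model, a minimization objective, and parameter values (population size, elite count, bounds) that are illustrative rather than taken from this paper.

```python
import numpy as np

def eda(fitness, dim, pop_size=100, n_elite=30, max_gen=200, bounds=(-5.0, 5.0)):
    """Minimal univariate-Gaussian EDA following Steps 1-6 above (minimization)."""
    lo, hi = bounds
    # Step 1: random population initialization
    pop = np.random.uniform(lo, hi, (pop_size, dim))
    for _ in range(max_gen):                        # Step 6: loop until budget exhausted
        fit = np.array([fitness(x) for x in pop])   # Step 2: evaluate fitness
        elite = pop[np.argsort(fit)[:n_elite]]      # Step 3: select m elite individuals
        mu = elite.mean(axis=0)                     # Step 4: fit the probabilistic model
        sigma = elite.std(axis=0) + 1e-12
        pop = np.random.normal(mu, sigma, (pop_size, dim))  # Step 5: sample new population
        pop = np.clip(pop, lo, hi)
    fit = np.array([fitness(x) for x in pop])
    return pop[np.argmin(fit)]
```

On a simple sphere function this sketch converges quickly, since the Gaussian mean moves toward the elite region while its standard deviation shrinks.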

The flowcharts of GA and EDA are as shown in Figure 1.


(Figure 1) The flowcharts of GA and EDA

2.2 Differential Evolution

DE[10] is another kind of population-based random search method, which was proposed in 1995.

The details of classical DE are shown as follows.

Step 1. Population Initialization.

DE usually initializes the population randomly, as follows:

\(x_{i, j}(0)=x_{i, j}^{L}+\operatorname{rand}(0,1) \cdot\left(x_{i, j}^{U}-x_{i, j}^{L}\right)\)       (1)

where i = 1, 2, …, NP and j = 1, 2, …, D. \(x_{i, j}^{U}\) indicates the upper bound, and \(x_{i, j}^{L}\) indicates the lower bound of the j-th variable.

Step 2. Mutation.

The frequently utilized mutation strategies are shown as follows:

DE/rand/1:

\(V_{i}(t)=X_{r_{1}}(t)+F \cdot\left(X_{r_{2}}(t)-X_{r_{3}}(t)\right)\)       (2)

DE/best/1:

\(V_{i}(t)=X_{\text {best }}(t)+F \cdot\left(X_{r_{1}}(t)-X_{r_{2}}(t)\right)\)       (3)

DE/current-to-best/1:

\(V_{i}(t)=X_{i}(t)+F \cdot\left(X_{b e s t}(t)-X_{i}(t)\right)+F \cdot\left(X_{r_{1}}(t)-X_{r_{2}}(t)\right)\)       (4)

DE/rand/2:

\(V_{i}(t)=X_{r_{1}}(t)+F \cdot\left(X_{r_{2}}(t)-X_{r_{3}}(t)\right)+F \cdot\left(X_{r_{4}}(t)-X_{r_{5}}(t)\right)\)       (5)

DE/best/2:

\(V_{i}(t)=X_{\text {best }}(t)+F \cdot\left(X_{r_{1}}(t)-X_{r_{2}}(t)\right)+F \cdot\left(X_{r_{3}}(t)-X_{r_{4}}(t)\right)\)       (6)

where t denotes the generation number, F is the scaling factor, r1, r2, r3, r4, r5 are distinct integers within the range [1, NP] with r1 ≠ r2 ≠ r3 ≠ r4 ≠ r5 ≠ i, and Xbest(t) denotes the best vector.

Step 3. Crossover.

DE usually uses binomial crossover to produce a trial vector Ui(t), which is shown in Eq. (7).

\(u_{i, j}(t)=\left\{\begin{array}{ll} v_{i, j}(t), & \text { if rand } \leq C R \text { or } j=j_{r a n d} \\ x_{i, j}(t), & \text { otherwise } \end{array}\right.\)       (7)

where CR ∈ [0, 1] denotes the crossover probability and jrand is a randomly selected integer in [1, D], which ensures that at least one element of the trial vector is inherited from the mutation vector Vi(t).

Step 4. Selection.

DE uses a greedy strategy to select the better individual according to the fitness of Ui(t) and Xi(t). The selection operation is shown in Eq. (8):

\(X_{i}(t+1)=\left\{\begin{array}{ll} U_{i}(t), & \text { if } f\left(U_{i}(t)\right) \leq f\left(X_{i}(t)\right) \\ \mathrm{X}_{i}(t), & \text { otherwise } \end{array}\right.\)       (8)
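Putting Steps 1-4 together, a classical DE iteration with the DE/rand/1 strategy can be sketched as below. The function name, bounds, and default parameter values are illustrative assumptions, and minimization is assumed, matching the selection rule in Eq. (8).

```python
import numpy as np

def de_rand_1(fitness, dim, NP=50, F=0.5, CR=0.9, max_gen=300, bounds=(-5.0, 5.0)):
    """Classical DE: DE/rand/1 mutation (Eq. 2), binomial crossover (Eq. 7),
    greedy selection (Eq. 8); minimization is assumed."""
    lo, hi = bounds
    X = np.random.uniform(lo, hi, (NP, dim))           # Eq. (1): initialization
    f = np.array([fitness(x) for x in X])
    for _ in range(max_gen):
        for i in range(NP):
            # pick three distinct indices, all different from i
            r1, r2, r3 = np.random.choice(
                [j for j in range(NP) if j != i], 3, replace=False)
            V = X[r1] + F * (X[r2] - X[r3])            # Eq. (2): DE/rand/1 mutation
            j_rand = np.random.randint(dim)
            mask = np.random.rand(dim) <= CR
            mask[j_rand] = True                        # guarantee one inherited component
            U = np.clip(np.where(mask, V, X[i]), lo, hi)  # Eq. (7): binomial crossover
            fu = fitness(U)
            if fu <= f[i]:                             # Eq. (8): greedy selection
                X[i], f[i] = U, fu
    return X[np.argmin(f)]
```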

3. Hybrid Estimation of Distribution Algorithm with Differential Evolution based on Self-adaptive Strategy (HEDADE-SA)

3.1 Alternative Probability Model

The evolution of EDA is achieved by sampling according to a probability model. In continuous EDA, Gaussian distribution and Cauchy distribution are the most commonly used probability models[7].

The Gaussian distribution, also known as the normal distribution, describes a random variable X denoted in the univariate case as \(X \sim N\left(\mu, \delta^{2}\right)\); its density function is shown as follows:

\(f(x ; \mu, \delta)=\frac{1}{\delta \sqrt{2 \pi}} e^{-\frac{1}{2}\left(\frac{x-\mu}{\delta}\right)^{2}}\)       (9)

where μ denotes the mean of X, δ denotes the standard deviation of X.

Cauchy distribution is similar to Gaussian distribution, and its probability density function is defined in Eq. (10):

\(f\left(x ; x_{0}, \gamma\right)=\frac{1}{\pi \gamma\left[1+\left(\frac{x-x_{0}}{\gamma}\right)^{2}\right]}\)       (10)

where x0 denotes the location value and γ denotes the scale value.

The probability model is essential because its choice directly impacts the quality of the offspring. However, it is quite challenging to generate superior offspring under a single probability distribution, and many works adopt more than one probability distribution[7]. In literature[7], the Gaussian and Cauchy probability models are used alternately; that is, there is a certain probability of choosing the Gaussian or the Cauchy distribution to generate offspring. By employing this method, the algorithm is more likely to obtain promising offspring. The alternative probability model is also employed in our proposal.

A new sampled individual of EDA can be generated in Eq. (11).

\(E_{i}(t)=\left\{\begin{array}{ll} \text { Gaussian }\left(\mu_{i}, \delta_{i}\right), & \text { if rand }_{i}(0,1)<0.5 \\ \text { Cauchy }\left(\mu_{i}, \delta_{i}\right), & \text { otherwise } \end{array}\right.\)       (11)
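A minimal sketch of the sampling rule in Eq. (11), assuming the model parameters μ and δ are estimated elsewhere (e.g., from the elite population); the function name is illustrative.

```python
import numpy as np

def sample_alternative(mu, sigma):
    """Eq. (11): per dimension, draw from Gaussian(mu_i, delta_i) when
    rand_i(0,1) < 0.5, otherwise from a Cauchy distribution with the same
    location and scale parameters."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    gaussian = np.random.normal(mu, sigma)                     # Gaussian(mu_i, delta_i)
    cauchy = mu + sigma * np.random.standard_cauchy(mu.shape)  # Cauchy(mu_i, delta_i)
    choose_gaussian = np.random.rand(*mu.shape) < 0.5          # rand_i(0,1) < 0.5
    return np.where(choose_gaussian, gaussian, cauchy)
```

The heavy tails of the Cauchy distribution occasionally produce samples far from the mean, which is exactly what helps the sampled population retain diversity.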

3.2 Self-adaptive Strategy

The proposed HEDADE-SA utilizes EDA to improve global exploration capability, while using DE/best/1 to increase local search capability. Inspired by literature[17][18], in HEDADE-SA the selection probability SPi(t) is dynamically modified in each generation and is computed as follows:

\(S P_{i}(t)=S P_{-} L+\left(S P_{-} U-S P_{-} L\right)^{*} L R F 1_{i}(t)\)       (12)

where SP_U is the maximum value of SPi(t), and SP_L is the minimum value of SPi(t), LRF1 is a linear reduction factor (LRF) as below[17]:

\(L R F 1_{i}(t)=\frac{1-\exp \left(-\left|f\left(X_{i}(t)\right)-f\left(X_{\text {gbest }}\right)\right|\right)}{1+\exp \left(-\left|f\left(X_{i}(t)\right)-f\left(X_{\text {gbest }}\right)\right|\right)}\)        (13)

The value of LRF1 reflects the gap between the fitness of the current individual f(Xi) and that of the global best individual f(Xgbest).
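Eqs. (12)-(13) can be sketched as follows; the defaults SP_L = 0.2 and SP_U = 0.9 mirror the values used later in Section 4.2.1, and the function name is illustrative.

```python
import numpy as np

def selection_probability(f_x, f_gbest, sp_l=0.2, sp_u=0.9):
    """Map the fitness gap |f(X_i) - f(X_gbest)| through LRF1 (Eq. 13),
    which squashes it into [0, 1), then rescale into [SP_L, SP_U) (Eq. 12)."""
    gap = abs(f_x - f_gbest)
    lrf1 = (1.0 - np.exp(-gap)) / (1.0 + np.exp(-gap))  # Eq. (13)
    return sp_l + (sp_u - sp_l) * lrf1                   # Eq. (12)
```

An individual whose fitness equals the global best gets SP = SP_L (it mostly exploits with DE), while a far-from-best individual gets SP close to SP_U (it mostly explores with EDA).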

Therefore, the mutation individual of the proposed algorithm is generated according to Eq. (14):

\(V_{i}(t)= \begin{cases}E_{i}(t), & \text { if rand }_{i}(0,1)<S P_{i}(t) \\ X_{\text {best }}(t)+F \cdot\left(X_{r_{1}}(t)-X_{r_{2}}(t)\right), & \text { otherwise }\end{cases}\)       (14)

In the early stage of HEDADE-SA, the value of SPi(t) is large, so the condition randi(0, 1) < SPi(t) holds with high probability and the proposed algorithm relies more on EDA for exploration. As evolution proceeds, SPi(t) decreases, the condition holds less often, and the proposed approach uses DE more to exploit promising solutions.

In order to avoid stagnation in the evolution process, an indicator is set to monitor the update of the population. Once the following condition is met, the value of SPi(t) is automatically reset to 0.5, which expands the search space of the algorithm and helps it escape stagnation[17].

\(S P_{i}(t+1)=\left\{\begin{array}{ll} 0.5, & \text { if } \mathrm{r} \leq L \\ S P_{i}(t), \text { otherwise } \end{array}\right.\)       (15)

where r is the proportion of updated individuals in the population, and L is a constant between 0 and 1.

In the classical EDA, the number of samples (NS) is fixed. Sampling with a fixed value is time-consuming and is not conducive to searching for the best solution. Inspired by literature[19][20], in HEDADE-SA, NS is adapted instead of taking a fixed value:

\(N S=\operatorname{round}\left(N S_{-} L+\left(N S_{-} U-N S_{-} L\right)^{*} L R F 2\right)\)       (16)

where NS_U is the maximum value of NS, NS_L is the minimum value of NS, round denotes a rounding function, and LRF2 is another LRF which is inspired by literature[21]: 

\(L R F 2=\exp \left(1-\frac{M \operatorname{ax} F E s}{M \operatorname{ax} F E s-F E s+1}\right)\)       (17)

where FEs is the number of evaluations, and MaxFEs is the maximum value of evaluations.

From Eq. (17), it is easy to see that the value of LRF2 decreases from 1 to 0. Furthermore, NS decreases from NS_U to NS_L according to Eq. (16).
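Eqs. (16)-(17) can be sketched as follows; the default NS_L = 3 matches Section 4.2.1, and NS_U = 100 is an illustrative assumption (Section 4.2.1 uses NS_U = 10%*NP with NP = 1000).

```python
import math

def sample_size(fes, max_fes, ns_l=3, ns_u=100):
    """Eqs. (16)-(17): shrink the number of samples NS from NS_U toward
    NS_L as the evaluation budget FEs is consumed."""
    lrf2 = math.exp(1.0 - max_fes / (max_fes - fes + 1))  # Eq. (17)
    return round(ns_l + (ns_u - ns_l) * lrf2)             # Eq. (16)
```

At FEs = 0 the exponent is close to 0, so NS starts near NS_U; as FEs approaches MaxFEs the exponent goes to 1 - MaxFEs, LRF2 vanishes, and NS falls to NS_L.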

3.3 The implementation of the proposed HEDADE-SA algorithm

With the above approaches, the framework of HEDADE-SA is described in Table 1.

(Table 1) The framework of HEDADE-SA


The time complexity of the proposed HEDADE-SA algorithm is mainly concentrated in Step 5.1, which sorts the NP individuals in ascending order with time complexity O(NP×NP). Consequently, the time complexity of HEDADE-SA is O(NP×NP×MaxFEs).

4. Experiments and Analysis

4.1 Benchmark functions and experimental setting

The algorithm is evaluated on a set of twenty-five benchmark functions from CEC2005[22]. Table 2 lists the function types, function names, initialization ranges, and bias values of these functions.

(Table 2) The benchmark functions of CEC2005


For a fair comparison, the following settings are common to all algorithms: the problem dimension D is 30, MaxFEs is set to 3×10^5, and each algorithm is run 30 times independently per function. The experiments were conducted on a PC with a 64-bit Intel Core i7-4770 3.40 GHz CPU and 8 GB RAM. All algorithms were implemented in Eclipse SDK 4.3.2 and executed on Ubuntu 16.04 (64-bit).

4.2 Experimental study and discussion

4.2.1 Parameter study in F and CR

In HEDADE-SA, the algorithm is quite sensitive to the choice of the scaling factor F and the crossover probability CR. Therefore, in this section, we compare the proposed HEDADE-SA algorithm with different values of F and CR, namely (F = 0.1, CR = 0.1), (F = 0.1, CR = 0.9), (F = 0.5, CR = 0.1), and (F = 0.5, CR = 0.9). The other parameters of HEDADE-SA are identical: NP = 1000, NS_U = 10%*NP, NS_L = 3, SP_L = 0.2, SP_U = 0.9, and L = 0.3. Table 3 shows the results of Friedman's test; the smaller the average ranking, the better the performance. According to Table 3, HEDADE-SA with (F = 0.1, CR = 0.9) performs best, and the following experiments adopt these parameters.

(Table 3) Results of HEDADE-SA with different values of F and CR


4.2.2 Comparison of HEDADE-SA and HEDADE-SA variants

To demonstrate the effectiveness of the self-adaptive strategy, HEDADE-SA is compared with four HEDADE-SA variants, namely HEDADE-SA with DE/rand/1, DE/rand/2, DE/best/1, and DE/best/2 mutation strategies, denoted by HEDADE-SA1, HEDADE-SA2, HEDADE-SA3, and HEDADE-SA4.

For HEDADE-SA1, HEDADE-SA2, HEDADE-SA3, and HEDADE-SA4, the parameter settings are the same: NP is set to 1000, SPi(t) is set to 0.5, and NS is set to 10%*NP. In Table 4, Wilcoxon's rank-sum test is used to compare HEDADE-SA with the other algorithms, where "-", "+", and "≈" indicate that the performance of HEDADE-SA is better than, worse than, and similar to that of the others, respectively. From Table 4, HEDADE-SA is superior to HEDADE-SA1, HEDADE-SA2, HEDADE-SA3, and HEDADE-SA4 on 13, 14, 12, and 11 problems, respectively, and similar to them on 9, 9, 10, and 11 problems, respectively. The results of Friedman's test are shown in Table 5, which indicates that the self-adaptive strategy is effective.

(Table 4) Results of HEDADE-SA and HEDADE-SA variants


(Table 5) The average rankings of HEDADE-SA with HEDADE-SA variants


4.2.3 Comparison of HEDADE-SA and other four algorithms

In this experiment, HEDADE-SA is compared with four other algorithms, namely UMDAc[23], DE[11], DE/EDA[12], and JDE[24]. UMDAc is a classical continuous EDA. DE is a representative EA. DE/EDA is a combination of DE and EDA. JDE is a well-known self-adaptive DE variant. For UMDAc, NP = 1000 and the sampling rate (SR) is 30%. For DE, NP = 100, F = 0.5, CR = 0.9, and DE/rand/1 is selected. For DE/EDA, NP = 1000, F = 0.5, CR = 0.9, and SR = 50%. For JDE, the parameter settings follow its original literature.

Experimental results are listed in Table 6. HEDADE-SA significantly outperforms UMDAc on all functions except f8, since UMDAc converges slowly and its solutions are inferior. In addition, compared with HEDADE-SA, DE is worse on 11 test problems, JDE on 13, and DE/EDA on 18. Furthermore, Table 7 displays the results of the Friedman test, and a graphical representation is shown in Figure 2. The superior performance of HEDADE-SA is evident from Table 7 and Figure 2. Figure 3 shows the convergence process of these algorithms on representative test functions.

(Table 6) Experimental results of HEDADE-SA and others for benchmark functions of CEC2005


(Table 7) The average rankings of HEDADE-SA and other four algorithms



(Figure 2) The graphical representation of the Friedman test of the algorithms


(Figure 3) Convergence curve of the algorithms on representative test functions

5. Conclusion

In this research, a hybrid estimation of distribution algorithm with differential evolution based on a self-adaptive strategy (HEDADE-SA) is proposed. An alternative probability model for sampling is utilized in the proposed algorithm. Moreover, a self-adaptive strategy is adopted to make full use of the EDA and DE operators. Through these methods, the population diversity and the convergence speed of the algorithm are improved. Experimental results show that HEDADE-SA exhibits superior performance compared with the other algorithms. In the future, the proposed HEDADE-SA algorithm can be applied to large-scale global optimization problems.

References

  1. S.U. Khan, S. Yang, L. Wang, and L. Liu, "A Modified Particle Swarm Optimization Algorithm for Global Optimizations of Inverse Problems," IEEE Transactions on Magnetics, Vol. 52, No. 3, pp. 1-4, 2016. https://doi.org/10.1109/TMAG.2015.2487678
  2. W. Deng, H. Zhao, L. Zou, G. Li, X. Yang et al., "A novel collaborative optimization algorithm in solving complex optimization problems," Soft Computing, Vol. 21, No. 15, pp. 4387-4398, 2017. https://doi.org/10.1007/s00500-016-2071-8
  3. H. Peng, C. Deng, and Z. Wu, "Best neighbor-guided artificial bee colony algorithm for continuous optimization problems," Soft Computing, vol. 23, No.18, pp. 8723-8740, 2019. https://doi.org/10.1007/s00500-018-3473-6
  4. H. Mühlenbein and G. Paass, "From recombination of genes to the estimation of distributions I. Binary parameters," in Proc. of International Conference on Parallel Problem Solving from Nature, pp. 178-187, Springer, Berlin, Heidelberg, 1996. https://doi.org/10.1007/3-540-61723-X_982
  5. C. Liu, H. Chen, R. Xu, and Y. Wang, "Minimizing the resource consumption of heterogeneous batch-processing machines using a copula-based estimation of distribution algorithm," Applied soft computing, Vol. 73, pp. 283-305, 2018. https://doi.org/10.1016/j.asoc.2018.08.036
  6. W. Dong, Y. Wang, and M. Zhou, "A latent space-based estimation of distribution algorithm for large-scale global optimization," Soft Computing, Vol.23, No.13, pp. 4593-4615, 2019. https://doi.org/10.1007/s00500-018-3390-8
  7. Q. Yang, W.N. Chen, Y. Li, C.P. Chen, X.M. Xu, and J. Zhang, "Multimodal Estimation of Distribution Algorithms," IEEE Transactions on cybernetics, Vol. 47, No. 3, pp. 636-650, 2017. https://doi.org/10.1109/TCYB.2016.2523000
  8. S. Gao, and C.W. de Silva, "Estimation distribution algorithms on constrained optimization problems," Applied Mathematics and Computation, Vol. 339, pp. 323-345, 2018. https://doi.org/10.1016/j.amc.2018.07.037
  9. M. Hauschild and M. Pelikan, "An introduction and survey of estimation of distribution algorithms," Swarm and Evolutionary Computation, Vol. 1, No. 3, pp. 111-128, 2011. https://doi.org/10.1016/j.swevo.2011.08.003
  10. R. Storn and K. Price, "Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, Vol. 11, No. 4, pp. 341-359, 1997. https://doi.org/10.1023/A:1008202821328
  11. S. Das, S.S. Mullick, and P.N. Suganthan, "Recent advances in differential evolution - an updated survey," Swarm and Evolutionary Computation, Vol. 27, pp. 1-30, 2016. https://doi.org/10.1016/j.swevo.2016.01.004
  12. J.Y. Sun, Q. Zhang, and E.P. Tsang, "DE/EDA: A new evolutionary algorithm for global optimization," Information sciences, Vol. 169, No. 3-4, pp. 249-262, 2005. https://doi.org/10.1016/j.ins.2004.06.009
  13. W. Shao, L. Shang, L. Ma, Z. Shao, and X. Ying, "Hybrid differential evolution/estimation of distribution algorithm based on adaptive incremental learning," Journal of Computational Information Systems, Vol. 10, No. 12, pp. 5355-5364, 2014. https://www.semanticscholar.org/paper/Hybrid-differential-evolution%2Festimation-of-based-Shao-Shang/8c12307eb372622ee30d34e7af3abb2bb205e4bf
  14. B. Dong, A. Zhou, and G. Zhang, "A hybrid estimation of distribution algorithm with differential evolution for global optimization," in Proc. of 2016 IEEE Symposium Series on Computational Intelligence, pp. 1-7, 2016. https://doi.org/10.1109/SSCI.2016.7850201
  15. H. Fang, A. Zhou, and H. Zhang, "Information fusion in offspring generation: A case study in DE and EDA," Swarm and evolutionary computation, Vol. 42, pp. 99-108, 2018. https://doi.org/10.1016/j.swevo.2018.02.014
  16. S.L. Wang, T.F. Ng, and F. Morsidi, "Self-adaptive Ensemble Based Differential Evolution," International Journal of Machine Learning and Computing, Vol. 8, No. 3, pp. 286-293, 2018. https://doi.org/10.18178/ijmlc.2018.8.3.701
  17. H. Liu, J. Han, L. Yuan, and B. Yu, "Self-adaptive bare-bones differential evolution based on bi-mutation strategy," Journal on Communications, Vol.38, No.8, pp.201-212, 2017. https://doi.org/10.11959/j.issn.1000-436x.2017051
  18. S. Wang, Y. Li, and H. Yang, "Self-adaptive mutation differential evolution algorithm based on particle swarm optimization," Applied Soft Computing, Vol. 81, pp. 1-22, 2019. https://doi.org/10.1016/j.asoc.2019.105496
  19. M.Z. Ali, N.H. Awad, P.N. Suganthan, and R.G. Reynolds, "An adaptive multipopulation differential evolution with dynamic population reduction," IEEE transactions on cybernetics, Vol. 47, No. 9, pp. 2768-2779, 2017. https://doi.org/10.1109/TCYB.2016.2617301
  20. L.B. Deng, L.L. Zhang, N. Fu, H.L. Sun, and L.Y. Qiao, "ERG-DE: An Elites Regeneration Framework for Differential Evolution," Information Sciences, Vol.539, pp. 81-103, 2020. https://doi.org/10.1016/j.ins.2020.05.108
  21. J. Liu, H. Peng, Z. Wu, J. Chen, and C. Deng, "Multi-strategy brain storm optimization algorithm with dynamic parameters adjustment," Applied Intelligence, Vol. 50, No. 4, pp. 1289-1315, 2020. https://doi.org/10.1007/s10489-019-01600-7
  22. P. N. Suganthan, N. Hansen, J.J. Liang, K. Deb, Y. Chen et al., "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," Technical Report, Singapore: Nanyang Technological University, 2005. https://www3.ntu.edu.sg/home/epnsugan/index_files/CEC-05/CEC05.htm
  23. P. Larranaga, R. Etxeberria, J.A. Lozano, and J.M. Pena, "Optimization in continuous domains by learning and simulation of Gaussian networks," in Proc. of the 2000 Genetic and Evolutionary Computation Conference Workshop Program, pp. 201-204, 2000. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.31.3105
  24. J. Brest, S. Greiner, B. Boskovic, M. Mernik, and V. Zumer, "Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems," IEEE transactions on evolutionary computation, Vol.10, No. 6, pp. 646-657, 2006. https://doi.org/10.1109/TEVC.2006.872133