Study of Personal Credit Risk Assessment Based on SVM

  • LI, Xin (College of Business Administration, Henan University of Science and Technology) ;
  • XIA, Han (College of Business Administration, Henan Finance University)
  • Received : 2022.07.18
  • Revised : 2022.10.15
  • Published : 2022.10.30

Abstract

Purpose: Support vector machine (SVM) ensembles have recently been proposed to improve the classification performance of credit risk models. However, the fusion strategies currently in use do not evaluate the degree of importance of each component SVM classifier's output when combining the component predictions into a final decision. To address this problem, this paper designs an SVM ensemble method based on the fuzzy integral, which aggregates the outputs of the component SVMs weighted by the importance of each component SVM. Research design, data, and methodology: This paper designs a personal credit risk evaluation index system comprising 16 indicators and develops an SVM ensemble method based on the fuzzy integral for building a credit risk assessment system that discriminates good creditors from bad ones. For the simulation experiments, 1500 samples of personal loan customers of a Chinese commercial bank over 2015-2020 were randomly selected. Results: Comparing the SVM ensemble with a single SVM and a neural network ensemble, the proposed method outperforms both in terms of classification accuracy. Conclusions: The results show that the proposed method achieves higher classification accuracy than the other classification methods, which confirms its feasibility and effectiveness.

Keywords

1. Introduction

Credit risk assessment is essentially a classification problem, and it plays an important role in banks' credit risk management. Credit risk assessment comprises the decision models and techniques that aid lenders in granting consumer credit by assessing the risk of lending to different consumers. It is an important area of research that enables financial institutions to develop lending strategies to optimize profit. Additionally, bad debt is a growing social problem that could be tackled partly by better-informed lending enabled by more accurate risk assessment models. An accurate assessment of risk translates into more efficient use of resources and lower loan losses for a bank.

In recent years, many statistical models such as logistic regression and discriminant analysis have been used to build credit risk assessment models (Zhao & Li, 2022; Jadwal et al., 2022). They are mostly used as binary classifiers in traditional credit risk studies, whose purpose is to classify customers accurately into good or bad credit. Discriminant analysis and logistic regression have been the most widely used techniques for building credit risk models. Both have the merit of being conceptually straightforward and widely available in statistical software packages. However, discriminant analysis is statistically valid only if the independent variables are normally distributed, an assumption that is often violated (Khemakhem & Boujelbene, 2015). Moreover, a prior probability of failure is needed to predict failure, and it is not always easy to find a sensible estimate for this prior probability, even though discriminant analysis yields the posterior probability.

These shortcomings have led to the use of the logistic regression model, which does not assume multivariate normality and also gives an estimate of the probability of bad credit (Abid, 2022). This methodology assumes that the predicted values of the probability are constrained by the cumulative logistic function. Srinivasan and Kim (1987) included logistic regression in a comparative study with other methods for a corporate credit granting problem.

In the recent literature, artificial neural networks (Hu & Su, 2022; Mahbobi et al., 2021) and fuzzy theory (Wójcicka-Wójtowicz, 2021; Xie et al., 2022) are often used to evaluate credit ranks. Applications of decision tree methods in credit scoring are described by Hand and Henley (1997). Neural networks are well suited to situations where we have a poor understanding of the data structure. In fact, neural networks can be regarded as systems that combine automatic feature extraction with the classification process; i.e., they decide how to combine and transform the raw characteristics in the data, as well as yielding estimates of the parameters of the decision surface. This means that such methods can be used immediately, without a deep grasp of the problem. In general, however, methods that utilize a good understanding of the data and the problem might be expected to perform better. The type of neural network normally applied to credit scoring problems can be viewed as a statistical model involving linear combinations of nested sequences of non-linear transformations of linear combinations of variables. Tsai and Wu (2008) applied neural networks to corporate credit decisions and fraud detection. However, neural networks have some inherent drawbacks that cannot be surmounted:

1. Gradient-based training often converges to a local rather than a global optimum, which can render the trained network useless.

2. Training is based on empirical risk minimization (ERM): the objective minimizes the error on the training samples but provides no guarantee of generalization to unknown samples.

3. The network design, such as the choice of the number of hidden-layer nodes, depends strongly on the user's experience and lacks a rigorous theoretical design procedure.

4. Whether, and how fast, the training algorithm converges is uncontrolled.

Fortunately, Cortes and Vapnik (1995) first presented a novel pattern recognition method based on statistical learning theory, called the support vector machine (SVM). Compared with artificial neural networks, SVMs generalize better, categorize more precisely, and avoid the 'curse of dimensionality' and overfitting (Zhao & Li, 2022). SVMs have been used to forecast time series and to evaluate credit, although standard SVMs treat all training samples equivalently. Recently, SVM ensembles have been proposed and applied in many areas in order to improve classification performance (Anil Kumar et al., 2022; Dou et al., 2020). The experimental results in these applications show that an SVM ensemble can achieve equal or better classification accuracy than a single support vector machine. However, these papers adopt a commonly used majority-voting aggregation method, which does not consider the degree of importance of each component SVM classifier's output when combining several independently trained SVMs into a final decision. To resolve this problem, a support vector machines ensemble strategy based on the fuzzy integral is proposed in this paper and then applied to credit evaluation in C2C e-commerce. The experimental results show that this method is stable, highly accurate, robust, and feasible, demonstrating that the SVMs ensemble is effective for credit evaluation.

2. Credit Evaluation Indexes

Figure 1 illustrates the customer credit evaluation system model. The model consists of three parts: credit index system construction, the appraisal model, and the evaluation results (credit ranks).

The appraisal model part of Figure 1 shows the general scheme of the proposed SVMs ensemble. The ensemble process is divided into the following steps:


Figure 1: Credit Evaluation Model

Step 1: Generate m training data sets via bagging from the original training set according to Section 3.1, and train a component SVM on each of those training sets.

Step 2: Obtain the probabilistic output model of each component SVM according to Platt (1999).

Step 3: Assign the fuzzy densities { gk({SVMi}), k = 1,...,c }, the degrees of importance of each component SVMi, i = 1,...,m, based on how well these SVMs performed on their own training data.

Step 4: Obtain probabilistic outputs of each component SVM when given a new test example.

Step 5: Compute the fuzzy integral ek for ωk, k = 1,...,c according to Section 3.2.

Step 6: Get the final decision \(\begin{aligned}k^{*}=\arg \max _{k} e_{k}\end{aligned}\).

In this paper, the credit indexes are selected and listed in Table 1; there are 16 credit indexes in total.

Table 1: Indexes of Credit Evaluation


3. Credit Evaluation by SVMs Ensemble

3.1 Bagging to Construct Component SVMs

The Bagging algorithm (Zhang et al., 2021) generates training data sets by randomly re-sampling, with replacement, from the given original training data set. Each training set is used to train a component SVM, and the component predictions are combined via the fuzzy integral.
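As a rough illustration, the bootstrap re-sampling step can be sketched as follows (a minimal NumPy sketch; the function name is ours, not from the paper):

```python
import numpy as np

def bagging_sets(X, y, n_sets, sample_size, seed=0):
    """Draw n_sets bootstrap training sets by sampling with replacement."""
    rng = np.random.default_rng(seed)
    sets = []
    for _ in range(n_sets):
        # indices may repeat: this is sampling WITH replacement
        idx = rng.integers(0, len(X), size=sample_size)
        sets.append((X[idx], y[idx]))
    return sets

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
sets = bagging_sets(X, y, n_sets=3, sample_size=7)
```

Each returned set would then be used to train one component SVM.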

3.2. SVMs Ensemble Based on Fuzzy Integral

In the following, we introduce the basic theory about the fuzzy integral (Kwak & Pedrycz, 2005).

Let X = {x1, x2,..., xn} be a finite set. A set function g : 2^X → [0,1] is called a fuzzy measure if the following conditions are satisfied:

1. g(∅) = 0, g(X) = 1

2. g(A) ≤ g(B), if A ⊂ B and A, B ⊂ X       (1)

From the definition of fuzzy measure g, Sugeno developed a so-called gλ fuzzy measure satisfying an additional property:

g(A∪B) = g(A) + g(B) + λg(A)g(B)       (2)

for all A, B ⊂ X with A ∩ B = ∅, and for some λ > -1.

Let h : X → [0,1] be a fuzzy subset of X and use the notation Ai = {x1, x2,..., xi}. For a gλ fuzzy measure, the value of g(Ai) can be determined recursively as

\(\begin{aligned}\left\{\begin{array}{l}g\left(A_{1}\right)=g\left(\left\{x_{1}\right\}\right)=g_{1} \\ g\left(A_{i}\right)=g_{i}+g\left(A_{i-1}\right)+\lambda g_{i} g\left(A_{i-1}\right), g_{i}=g\left(\left\{x_{i}\right\}\right), 1<i \leq n\end{array}\right.\end{aligned}\)       (3)

λ is given by solving the following equation

\(\begin{aligned}\lambda+1=\prod_{i=1}^{n}\left(1+\lambda g_{i}\right)\end{aligned}\)       (4)

where λ ∈ (-1, +∞) and λ ≠ 0.

Suppose h(x1) ≥ h(x2) ≥...≥h(xn), (if not, X is rearranged so that this relation holds). Then the so-called Sugeno fuzzy integral e with respect to gλ fuzzy measure over X can be computed by

\(\begin{aligned}e=\max _{i=1}^{n}\left[\min \left(h\left(x_{i}\right), g\left(A_{i}\right)\right)\right]\end{aligned}\)       (5)

Thus the calculation of the fuzzy integral with respect to a gλ fuzzy measure would only require the knowledge of the fuzzy densities, where the ith density gi is interpreted as the degree of importance of the source xi towards the final decision. These fuzzy densities can be subjectively assigned by an expert or can be generated from training data.

Let Ω = {ω1, ω2,...,ωc} be a set of classes of interest and S = {SVM1, SVM2,...,SVMm} be a set of component SVMs. Let hk : S → [0,1] be the degree of belief that a new sample x belongs to class ωk; that is, hk(SVMi) is the probability assigned by SVMi that the new sample x is in class ωk. Given { hk(SVMi), i = 1,2,...,m } and the fuzzy densities { gk(SVMi), i = 1,2,...,m }, the fuzzy integral ek for class ωk can be calculated using (3) to (5). Once the fuzzy integral values {ek, k = 1,..., c} are obtained, we get the final decision: \(\begin{aligned}k^{*}=\arg \max _{k} e_{k}\end{aligned}\).
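As a concrete illustration of Eqs. (3) to (5), the following is a minimal NumPy sketch (the helper names are ours; λ is found by simple bisection on Eq. (4)):

```python
import numpy as np

def solve_lambda(g, tol=1e-12):
    """Solve lambda + 1 = prod_i (1 + lambda * g_i)  (Eq. 4), lambda > -1."""
    g = np.asarray(g, float)
    f = lambda lam: np.prod(1.0 + lam * g) - lam - 1.0
    s = g.sum()
    if abs(s - 1.0) < 1e-12:
        return 0.0                      # densities are already additive
    if s > 1.0:                         # root lies in (-1, 0)
        lo, hi = -1.0 + 1e-9, -1e-9
    else:                               # root lies in (0, +inf)
        lo, hi = 1e-9, 1.0
        while f(hi) < 0:                # expand until the sign changes
            hi *= 2.0
    while hi - lo > tol:                # plain bisection
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def sugeno_integral(h, g):
    """Sugeno fuzzy integral of h w.r.t. the g_lambda measure (Eqs. 3 and 5)."""
    h, g = np.asarray(h, float), np.asarray(g, float)
    order = np.argsort(-h)              # sort so that h(x1) >= h(x2) >= ...
    h, g = h[order], g[order]
    lam = solve_lambda(g)
    gA = np.empty_like(g)
    gA[0] = g[0]
    for i in range(1, len(g)):          # recursion of Eq. (3)
        gA[i] = g[i] + gA[i - 1] + lam * g[i] * gA[i - 1]
    return float(np.max(np.minimum(h, gA)))   # Eq. (5)
```

For c classes, one would compute ek = sugeno_integral(hk, gk) over the m component SVMs and pick k* = argmax_k ek.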

3.3. Assignment of Fuzzy Density

For an SVMs ensemble, the critical issue is how to effectively assign the fuzzy densities, which determine the values of the fuzzy integral. There are a number of ways to obtain the fuzzy densities (Zhang et al., 2015; Roy & Shaw, 2022), but they are based only on accuracy and do not consider the uncertainty present in the recognition process. The density values here are generated based on both accuracy and uncertainty. The procedure for calculating the densities is described with the aid of Table 2.

Table 2: Outputs of the Classifier xj


Suppose there are l training samples y1, y2,...,yl, and let Yl = {y1, y2,...,yl}. For each classifier xj we obtain the sets of outputs shown in Table 2.

Class C1 can be regarded as a fuzzy set defined on Yl, namely

\(\begin{aligned}C_{1}^{j}=\left[\begin{array}{llll}y_{1} & y_{2} & \ldots & y_{l} \\ h_{11}^{j} & h_{12}^{j} & \ldots & h_{1 l}^{j}\end{array}\right]\end{aligned}\)       (6)

where h1tj denotes the membership of yt in C1 from the point of view of xj, and its vagueness can be measured by:

\(\begin{aligned}E_{v}\left(C_{1}^{j}\right)=-\frac{1}{l} \sum_{t=1}^{l}\left[h_{1 t}^{j} \log _{2} h_{1 t}^{j}+\left(1-h_{1 t}^{j}\right) \log _{2}\left(1-h_{1 t}^{j}\right)\right]\end{aligned}\)       (7)

Ev(C1j) represents the vagueness of C1 under the condition of xj. When h1tj = 0.5 for all t, t = 1,2,..., l, Ev(C1j) = 1, which represents the greatest vagueness. When h1tj = 0 or 1 for all t, t = 1,2,..., l, Ev(C1j) = 0, which represents no vagueness. Because the output is obtained by xj, it can be interpreted as the cognitive ability of xj with respect to the class C1. Similarly, we can calculate the cognitive ability of xj with respect to the other classes, and finally we obtain the following matrix.

\(\begin{aligned}\left[\begin{array}{cccc}E_{v}^{1}\left(C_{1}\right) & E_{v}^{1}\left(C_{2}\right) & \ldots & E_{v}^{1}\left(C_{n}\right) \\ E_{v}^{2}\left(C_{1}\right) & E_{v}^{2}\left(C_{2}\right) & \ldots & E_{v}^{2}\left(C_{n}\right) \\ \ldots & \ldots & \ldots & \ldots \\ E_{v}^{m}\left(C_{1}\right) & E_{v}^{m}\left(C_{2}\right) & \ldots & E_{v}^{m}\left(C_{n}\right)\end{array}\right]\end{aligned}\)       (8)

where Evj(Ci), i = 1,2,...,n; j = 1,2,...,m, represents the sensitivity of xj with respect to class Ci.

On the other hand, a performance matrix, which is an n × n matrix, can be formed based on the outputs of the classifiers.

\(\begin{aligned}\left[\begin{array}{cccc}u_{11}^{j} & u_{12}^{j} & \ldots & u_{1 n}^{j} \\ u_{21}^{j} & u_{22}^{j} & \ldots & u_{2 n}^{j} \\ \ldots & \ldots & \ldots & \ldots \\ u_{n 1}^{j} & u_{n 2}^{j} & \ldots & u_{n n}^{j}\end{array}\right]\end{aligned}\)       (9)

where uipj (i, p = 1,2,...,n) represents the number of training data belonging to class Ci that are classified as class Cp by classifier xj; the data are classified correctly when i = p. Thus:

\(\begin{aligned}A_{i}^{j}=\frac{u_{i i}^{j}}{\sum_{p=1}^{n} u_{i p}^{j}}\end{aligned}\)        (10)

is interpreted as the degree in which classifier xj identifies class Ci correctly. Then fuzzy densities can be defined as:

\(g_{i}^{j}=A_{i}^{j} \times e^{-E_{v}\left(C_{i}^{j}\right)}\)       (11)

In the case of Ev(Cij) = 0 there is no uncertainty, which indicates that classifier xj has a clear conception of the class Ci; in other words, the classifier xj has the highest discriminability for class Ci, so the fuzzy density equals the classification rate. In the case of Ev(Cij) = 1 the uncertainty is greatest, which indicates that classifier xj has no ability to distinguish the class Ci, so the fuzzy density is minimal.
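The density assignment of Eqs. (7), (10), and (11) can be sketched as follows (our own helper names; log base 2 is used so that Ev = 1 at h = 0.5, matching the text, and h is clipped away from 0 and 1 for numerical safety):

```python
import numpy as np

def vagueness(h):
    """E_v of Eq. (7): mean binary entropy (log base 2) of memberships h."""
    h = np.clip(np.asarray(h, float), 1e-12, 1.0 - 1e-12)
    return float(-np.mean(h * np.log2(h) + (1.0 - h) * np.log2(1.0 - h)))

def per_class_accuracy(confusion):
    """A_i^j of Eq. (10): diagonal over row sums of the confusion matrix."""
    confusion = np.asarray(confusion, float)
    return confusion.diagonal() / confusion.sum(axis=1)

def fuzzy_densities(confusion, ev_per_class):
    """g_i^j = A_i^j * exp(-E_v(C_i^j))  (Eq. 11)."""
    return per_class_accuracy(confusion) * np.exp(-np.asarray(ev_per_class))
```

For example, a classifier with confusion matrix [[8, 2], [1, 9]] and per-class vagueness [0.0, 1.0] gets densities [0.8, 0.9·e⁻¹]: full confidence in its accuracy on the first class, discounted confidence on the second.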

After determining the fuzzy densities, the gλ -fuzzy measure and the fuzzy integral can be computed. The final output class from the combination classifier is the one with the highest integrate value.

4. Experimental Results

In this paper, the experimental data come from a commercial bank. We randomly selected 1500 customer samples, of which 1279 are in good condition, called 'good' customers, and the other 221 are in bad condition and mostly defaulted, called 'bad' customers. The two categories are therefore unbalanced, which is unsuitable for SVMs ensemble training, since such training needs the two categories to be of similar size. If we train on the sample directly, the optimal separating hyperplane will be biased toward the lower-density category (Figure 2), which leads to large classification error. To obtain a well-performing classifier, we must process the samples to balance the two categories. Different assignments of the penalties result in different performance.


Figure 2: Unequal Samples Classification

In this paper we use successive approximation to obtain Ci (i = 1, 2), the penalties for the two types of samples. The process must satisfy the following formula:

\(\begin{aligned}\min \frac{1}{2}(W \bullet W)+\sum_{i=1}^{n} C_{i} \xi_{i}\end{aligned}\)       (12)

where

\(\begin{aligned}C_{i}=\left\{\begin{array}{ll}C_{1} & X_{i} \in \Xi_{1} \\ C_{2} & X_{i} \notin \Xi_{1}\end{array}\right.\end{aligned}\)       (13)

And C1 , C2 must satisfy

\(\begin{aligned}\frac{C_{1}}{C_{2}}=\frac{N_{2}}{N_{1}}\end{aligned}\)       (14)

Since C1/C2 = N2/N1 = 221/1279 ≈ 1/6, in this paper we set \(\begin{aligned}\frac{C_{2}}{C_{1}}=6\end{aligned}\) (Ξ1 denotes the good customers' samples), giving C1 = 200 and C2 = 1200.
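As a quick sanity check on Eq. (14) with the sample counts above (a trivial computation, not from the paper):

```python
# N1 good and N2 bad customers; Eq. (14) requires C1/C2 = N2/N1.
N1, N2 = 1279, 221
target = N2 / N1                 # ~0.173, close to 1/6
C1, C2 = 200, 1200               # the paper's rounded choice: C2/C1 = 6
assert abs(C1 / C2 - 1 / 6) < 1e-12
assert abs(target - C1 / C2) < 0.01   # rounding gap of about 0.006
```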

In this paper, we randomly select half of each class for training ('good': 639; 'bad': 110) and use the remaining samples ('good': 640; 'bad': 111) to test the model's forecasting accuracy.

For bagging, we randomly re-sample 70 samples with replacement from the original training data set. We independently train three component SVMs with different penalties on the three training data sets generated by bagging, and aggregate the three trained SVMs via the fuzzy integral. Each component SVM uses a polynomial or RBF kernel function:

K(x, xi) = [(x • xi) + c]q

K(x, xi) = exp(-‖x - xi‖2 / σ2)       (15)

The corresponding parameters are selected by five-fold cross-validation. To reduce the effect of random sampling, ten experiments are performed, and the average performance on the training and test datasets is reported in Table 3 and Table 4 ('0' means 'good' and '1' means 'bad'). Given the complexity of practical problems, this accuracy is acceptable for credit evaluation.
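A single component SVM with unequal class penalties and Platt-scaled probabilistic outputs might look like the following sketch (assuming scikit-learn; the data here is synthetic, standing in for the bank dataset, which is not public):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data standing in for the good/bad credit samples.
X, y = make_classification(n_samples=750, weights=[0.85, 0.15], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.5, stratify=y, random_state=0)

# class_weight scales the penalty C per class; "balanced" uses inverse class
# frequencies, which reproduces the C1/C2 = N2/N1 rule of Eq. (14).
clf = SVC(kernel="poly", degree=2, coef0=1.0, C=1.0,
          class_weight="balanced", probability=True)  # Platt (1999) outputs
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)   # the h_k(SVM_i) fed to the fuzzy integral
acc = clf.score(X_te, y_te)
```

In the full ensemble, three such classifiers would be trained on bagged sets and their `predict_proba` outputs combined by the fuzzy integral as in Section 3.2.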

Table 3: Training Set Correct Rate


Table 4: Test Set Correct Rate


The comparison of the single SVM, the SVMs ensemble via majority voting, the SVMs ensemble via fuzzy integral, the single neural network, and the fuzzy neural network ensemble is shown in Figure 3. In Figure 3, from left to right: 1: single SVM, 2: SVMs ensemble via majority voting, 3: SVMs ensemble via fuzzy integral, 4: single neural network, 5: fuzzy neural network ensemble.


Figure 3: The Results of Classification Accuracy Comparison

In addition to examining the average prediction accuracy of the SVMs ensemble via fuzzy integral, we compare the Type I and Type II errors of the single SVM, the SVMs ensemble via majority voting, the SVMs ensemble via fuzzy integral, the single neural network, and the fuzzy neural network ensemble. Table 5 presents the average error rates (%) of Type I and Type II errors for these five models under the training datasets. (A Type I error is a prediction error in which the model incorrectly classifies the bad credit group into the good credit group; a Type II error, conversely, is a prediction error in which the model incorrectly classifies the good credit group into the bad credit group.) On average, the SVMs ensemble via fuzzy integral is the winner as the best model/classifier architecture for credit scoring: from Table 5, it has the lowest Type I and Type II error rates. This also shows that the proposed method is stable, highly accurate, robust, and feasible.

Table 5: Average Error Rate of TypeⅠ and TypeⅡ Error under the Training Datasets


Recent studies suggest that combining multiple classifiers (classifier ensembles) should perform better than single classifiers, and the SVMs ensemble supports this conclusion: the ensemble of trained SVM classifiers achieves better classification results than a single SVM. However, the single neural network classifier outperforms the (diversified) multiple neural network classifiers; the main reason is that the divided training datasets may be too small to train the multiple neural network classifiers effectively. SVMs, by contrast, perform better on small datasets, so the SVMs ensemble's accuracy is higher than the neural network ensemble's.

5. Conclusions

With the development of electronic commerce, an increasing share of bargaining and trade is conducted on websites, so trade security and customer credit are widely taken into account, and the problem of how to evaluate personal credit is increasingly important. In this paper, we constructed a customer credit index system with sixteen indexes and proposed a support vector machines ensemble strategy based on the fuzzy integral for classification. The most important advantage of this approach is that not only are the classification results combined, but the relative importance of the different component SVM classifiers is also considered. Unlike previous studies, the fuzzy-integral SVM ensemble model constructed in this paper accounts for the importance of each single support vector machine classifier's output. The method has high stability, accuracy, and robustness, which demonstrates the effectiveness of the fuzzy-integral SVM ensemble for website credit evaluation. The conclusions of this paper help to further improve the efficiency and accuracy of credit risk assessment for individual customers of commercial banks, and provide a basis for commercial banks to conduct credit risk assessment effectively in the era of big data.

However, this paper mainly uses data on personal loan customers of Chinese commercial banks for simulation, which entails certain limitations. Future research will extend the sample data to international commercial banks to test the broader applicability of the proposed method.

References

  1. Abid, L. (2022). A Logistic Regression Model for Credit Risk of Companies in the Service Sector. International Research in Economics and Finance, 6(2), 1-10. https://doi.org/10.20849/iref.v6i2.1179
  2. Anil Kumar, C. J., Raghavendra, B. K., & Raghavendra, S. (2022). A Credit Scoring Heterogeneous Ensemble Model Using Stacking and Voting. Indian Journal of Science and Technology, 15(7), 300-308.
  3. Cortes, C., & Vapnik, V. (1995). Support-vector networks. Machine learning, 20(3), 273-297. https://doi.org/10.1007/BF00994018
  4. Dou, J., Yunus, A. P., Bui, D. T., Merghadi, A., Sahana, M., Zhu, Z., … & Pham, B. T. (2020). Improved landslide assessment using support vector machine with bagging, boosting, and stacking ensemble machine learning framework in a mountainous watershed, Japan. Landslides, 17(3), 641-658. https://doi.org/10.1007/s10346-019-01286-5
  5. Hand, D. J., & Henley, W. E. (1997). Statistical classification methods in consumer credit scoring: a review. Journal of the Royal Statistical Society: Series A (Statistics in Society), 160(3), 523-541. https://doi.org/10.1111/j.1467-985X.1997.00078.x
  6. Hu, Y., & Su, J. (2022). Research on Credit Risk Evaluation of Commercial Banks Based on Artificial Neural Network Model. Procedia Computer Science, 199(1), 1168-1176. https://doi.org/10.1016/j.procs.2022.01.148
  7. Jadwal, P. K., Pathak, S., & Jain, S. (2022). Analysis of clustering algorithms for credit risk evaluation using multiple correspondence analysis. Microsystem Technologies, 56(3), 1-7.
  8. Kim, H. C., Pang, S., Je, H. M., Kim, D., & Bang, S. Y. (2003). Constructing support vector machine ensemble. Pattern recognition, 36(12), 2757-2767. https://doi.org/10.1016/S0031-3203(03)00175-4
  9. Khemakhem, S., & Boujelbene, Y. (2015). Credit risk prediction: A comparative study between discriminant analysis and the neural network approach. Accounting and Management Information Systems, 14(1), 60-78.
  10. Kwak, K. C., & Pedrycz, W. (2005). Face recognition: A study in information fusion using fuzzy integral. Pattern Recognition Letters, 26(6), 719-733. https://doi.org/10.1016/j.patrec.2004.09.024
  11. Lai, R. K., Fan, C. Y., Huang, W. H., & Chang, P. C. (2009). Evolving and clustering fuzzy decision tree for financial time series data forecasting. Expert Systems with Applications, 36(2), 3761-3773. https://doi.org/10.1016/j.eswa.2008.02.025
  12. Mahbobi, M., Kimiagari, S., & Vasudevan, M. (2021). Credit risk classification: an integrated predictive accuracy algorithm using artificial and deep neural networks. Annals of Operations Research, 40(7), 1-29.
  13. Platt, J. (1999). Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in large margin classifiers, 10(3), 61-74.
  14. Roy, P. K., & Shaw, K. (2022). Developing a multi-criteria sustainable credit score system using fuzzy BWM and fuzzy TOPSIS. Environment, Development and Sustainability, 24(4), 5368-5399. https://doi.org/10.1007/s10668-021-01662-z
  15. Srinivasan, V., & Kim, Y. H. (1987). Credit granting: A comparative analysis of classification procedures. The Journal of Finance, 42(3), 665-681. https://doi.org/10.1111/j.1540-6261.1987.tb04576.x
  16. Tsai, C. F., & Wu, J. W. (2008). Using neural network ensembles for bankruptcy prediction and credit scoring. Expert systems with applications, 34(4), 2639-2649. https://doi.org/10.1016/j.eswa.2007.05.019
  17. Wojcicka-Wojtowicz, A., & Piasecki, K. (2021). Application of the Oriented Fuzzy Numbers in Credit Risk Assessment. Mathematics, 9(5), 535-536. https://doi.org/10.3390/math9050535
  18. Xie, X., Hu, X., Xu, K., Wang, J., Shi, X., & Zhang, F. (2022). Evaluation of associated credit risk in supply chain based on trade credit risk contagion. Procedia Computer Science, 199(1), 946-953. https://doi.org/10.1016/j.procs.2022.01.119
  19. Zhang, L., Hu, H., & Zhang, D. (2015). A credit risk assessment model based on SVM for small and medium enterprises in supply chain finance. Financial Innovation, 1(1), 1-21. https://doi.org/10.1186/s40854-015-0007-4
  20. Zhang, T., Fu, Q., Wang, H., Liu, F., Wang, H., & Han, L. (2021). Bagging-based machine learning algorithms for landslide susceptibility modeling. Natural Hazards, 49(4), 1-24.
  21. Zhao, J., & Li, B. (2022). Credit risk assessment of small and medium-sized enterprises in supply chain finance based on SVM and BP neural network. Neural Computing and Applications, 34(8), 12467-12478. https://doi.org/10.1007/s00521-021-06682-4