
Credit Risk Evaluations of Online Retail Enterprises Using Support Vector Machines Ensemble: An Empirical Study from China

  • LI, Xin (Henan University of Science and Technology);
  • XIA, Han (School of Business Administration, Henan Finance University)
  • Received : 2022.04.30
  • Accepted : 2022.08.08
  • Published : 2022.08.30

Abstract

The e-commerce market faces significant credit risks due to the complexity of the industry and information asymmetries, and credit risk has begun to stymie the growth of e-commerce. However, there is no reliable system for evaluating the creditworthiness of e-commerce companies. This paper therefore constructs a credit risk evaluation index system that comprehensively considers the online and offline behavior of online retail enterprises, including 15 indicators that reflect online credit risk and 15 indicators that reflect offline credit risk. The paper establishes an ensemble method based on a fuzzy-integral support vector machine, which takes the factor analysis results of the credit risk evaluation index system of online retail enterprises as the input and the credit risk evaluation results of online retail enterprises as the output. The method takes into account both the classification results of each sub-classifier and the importance of each sub-classifier's decision to the final decision. Sample data of 1,500 online retail loan customers from a bank are used to test the model. The empirical results demonstrate that the proposed method outperforms a single SVM and the traditional SVM aggregation technique via majority voting in terms of classification accuracy, which provides a basis for banks to establish a reliable evaluation system.


1. Introduction

Business websites make it easy and quick to inquire, communicate, bargain, trade, pay, and obtain services. According to CNNIC's 2013 report, China's Internet penetration rate had risen to 45.8%, and online retail accounted for 7.4% of total retail sales of consumer goods; there were 302 million online shoppers, or 48.9% of all Internet users. According to CNNIC's 2021 report, the Internet penetration rate had risen to 73.0%, online retail accounted for 24.5% of total retail sales of consumer goods, and there were 842 million online shoppers, or 81.6% of all Internet users. The number of online shoppers thus increases rapidly every year. Compared with traditional shopping, online shopping has many advantages, such as a wide range of product choices, better satisfaction of consumers' personalized needs, low cost, convenient transactions, and easy price comparison. However, the rate of successful online transactions remains low because increased risk has accompanied the rapid development of the online shopping market. With the further popularization of online shopping, there is still much room for Chinese online retail to grow, and the most pressing problem affecting its development is credit and trust. Enterprise credit is especially significant for developing e-commerce platforms (Isnurhadi et al., 2021), so it directly affects the healthy development of Chinese electronic commerce. Hence, it is urgent to construct a credit evaluation system and method.

The credit system is a vital condition for guaranteeing the development of electronic commerce (Pham, 2021); thus, credit evaluation in the e-commerce environment is worth studying. Research on credit evaluation can help credit-facilitating agencies develop a valuable credit system. Credit reports can be produced legally by evaluating credit, collecting credit histories, and delivering the reports to authorized users for a fee. Meanwhile, storing enterprises' creditworthiness in a network allows e-commerce activities to be carried out safely and easily.

2. Literature Review

Essentially, credit evaluation is a regression, classification, and pattern-recognition problem. Many statistical models, such as logistic regression and discriminant analysis, have been used to build credit risk models (Pham, 2022). In traditional credit risk studies, they are mostly used as binary classifiers (Le & Diep, 2020; Rizwan-ul-Hassan et al., 2021) whose purpose is to correctly categorize businesses as having good or bad credit. Discriminant analysis and logistic regression have been the most widely used methods for creating credit risk models; both are conceptually simple and readily available in statistical software packages. Discriminant analysis is statistically justified only when the independent variables are normally distributed, an assumption that is often violated. Additionally, an a priori probability of failure is required to predict failure (Naili & Lahrichi, 2022). Although discriminant analysis provides the posterior probability, it is not always simple to obtain a reasonable estimate of the prior probability of failure.

Due to these drawbacks, the logistic regression model is used, which gives an estimate of the likelihood of having bad credit and does not assume multi-normality (Abid, 2022). This methodology assumes that the predicted values of the probability are cumulatively constrained by the logistic function. For an issue involving corporate loan issuance, Srinivasan and Kim (1987) included logistic regression in a comparison study with other approaches. Leonard and Villiers (2000) investigated various models, including a model with random effects, and applied logistic regression to a process of evaluating credit rating.

In recent literature, artificial neural networks (Khashman, 2010) and fuzzy theory (Shen et al., 2018) are frequently used to evaluate credit ranks, and decision-tree applications in credit scoring have also been discussed. Neural networks perform well when our understanding of the data structure is limited. In fact, neural networks may be viewed as systems that integrate automatic feature extraction with the classification process, since they have the freedom to mix and transform the data's raw features while estimating the parameters of the decision surface. This means that such techniques do not require a thorough understanding of the problem before application; in general, however, solutions that exploit a solid understanding of the data and the problem can be expected to perform better. The type of neural network normally applied to credit scoring can be viewed as a statistical model involving linear combinations of nested sequences of non-linear transformations of linear combinations of variables. Tsai and Wu (2008) described neural networks for corporate credit decisions and fraud detection. However, neural networks have several inherent drawbacks that cannot be overcome (Abellán & Castellano, 2017):

1. The gradient method used to train the network frequently converges to a local optimum rather than a global one, so the trained network can be ineffective.

2. Neural network training rests on empirical risk minimization (ERM): the objective function minimizes errors on the training samples but does not guarantee generalization to unseen samples.

3. Because there is no rigorous design procedure in principle, architectural choices such as the number of hidden-layer nodes depend heavily on user experience.

4. Whether neural network training converges, and how quickly it does so, is not guaranteed.

Fortunately, the support vector machine (SVM) was proposed as a novel pattern recognition method based on statistical learning theory (Moula et al., 2017). SVMs offer better generalization and more accurate classification than artificial neural networks, and they also avoid the "curse of dimensionality" and overfitting. SVMs have been used to forecast time series and assess credit; in general, they treat every training sample equally. Recently, support vector machine ensembles have been proposed and applied in many areas to improve classification performance (Yao et al., 2022; Li, 2018; Liu et al., 2022). The experimental results in these applications show that an SVM ensemble can achieve equal or better classification accuracy than a single SVM. However, these papers adopt the commonly used majority-voting aggregation method, which does not consider the degree of importance of each component SVM's output when combining several independently trained SVMs into a final decision. To resolve this problem, this paper proposes an SVM ensemble strategy based on the fuzzy integral and uses it to evaluate credit in B2C e-commerce. The experimental results show that the method is stable, highly accurate, robust, and feasible, demonstrating that the SVM ensemble is effective for website credit evaluation.

3. Credit Evaluation Indexes

Figure 1 illustrates the credit evaluation system model for enterprises. The model consists of three parts: credit index system construction, the evaluation model, and the evaluation results (credit ranks).


Figure 1: Credit Evaluation Model

The evaluation model part of Figure 1 shows the general scheme of the proposed SVM ensemble. The ensemble process is divided into the following steps:

Step 1: Generate m training data sets via bagging from the original training set according to Section 4.1 and train a component SVM on each of those training sets.

Step 2: Obtain the probabilistic output model of each component SVM following Platt (1999).

Step 3: Assign the fuzzy densities {gk(SVMi), k = 1, ..., c}, i.e., the degree of importance of each component SVMi, i = 1, ..., m, based on how well these SVMs performed on their own training data.

Step 4: Obtain probabilistic outputs of each component SVM when given a new test example.

Step 5: Compute the fuzzy integral ek for each class ωk, k = 1, ..., c, according to Section 4.2.

Step 6: Get the final decision \(\begin{aligned}k^{*}=\arg \max _{k} e_{k}\\\end{aligned}\).

For sample robustness, anomalous data are first eliminated using a two- or three-standard-deviation test. Factor analysis is then applied, and indicators whose factor score coefficients exceed 0.01 are retained. The filtered credit indexes are listed in Table 1; there are 30 credit indexes in total.
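For illustration only, the outlier removal and factor-analysis filtering could be sketched as below; the DataFrame layout, the number of factors, and the reading of the 0.01 threshold as an absolute loading cutoff are assumptions, not the authors' exact procedure.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def filter_credit_indexes(df: pd.DataFrame, n_factors: int = 5, threshold: float = 0.01) -> pd.DataFrame:
    """Three-sigma outlier removal followed by factor-analysis-based indicator filtering (sketch)."""
    # Drop samples with any indicator more than 3 standard deviations from its mean.
    z = (df - df.mean()) / df.std(ddof=0)
    cleaned = df[(z.abs() <= 3).all(axis=1)]
    # Fit a factor model and keep indicators whose largest absolute loading exceeds the threshold.
    fa = FactorAnalysis(n_components=n_factors, random_state=0).fit(cleaned.values)
    keep = np.abs(fa.components_).max(axis=0) > threshold  # one keep/drop decision per indicator
    return cleaned.loc[:, keep]
```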

Table 1: Credit Index Based on Factor Score Coefficient Matrix


4. Credit Evaluations by SVMs Ensemble

4.1. Bagging to Construct Component SVMs

The Bagging algorithm (Bauer & Kohavi, 1999) generates K training data sets {TRk, k = 1, 2, ..., K} by randomly re-sampling, but with replacement, from the given original training data set TR. Each training set TRk will be used to train a component SVM. The component predictions are combined via fuzzy integral.
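A minimal sketch of this bagging step is given below, assuming scikit-learn's SVC with probability=True as a stand-in for Platt's (1999) probabilistic outputs; the sample size and kernel are taken from Section 5 but are otherwise illustrative.

```python
import numpy as np
from sklearn.svm import SVC

def train_bagged_svms(X, y, n_estimators=3, sample_size=70, random_state=0):
    """Draw bootstrap training sets TR_k and fit one probabilistic component SVM per set.
    X, y are NumPy arrays of features and class labels."""
    rng = np.random.default_rng(random_state)
    models = []
    for _ in range(n_estimators):
        idx = rng.choice(len(X), size=sample_size, replace=True)  # re-sampling with replacement
        svm = SVC(kernel="rbf", probability=True)                 # probability=True -> Platt-scaled outputs
        svm.fit(X[idx], y[idx])
        models.append(svm)
    return models
```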

4.2. SVMs Ensemble Based on Fuzzy Integral

In the following, we introduce the basic theory of the fuzzy integral (Kwak & Pedrycz, 2005).

Let X = {x1, x2, ..., xn} be a finite set of information sources and let g be a set function defined on the subsets of X with values in [0, 1]. g is a fuzzy measure if the following conditions are satisfied:

1. g(∅) = 0

2. g(X) = 1

3. g(A) ≤ g(B) if A ⊆ B and A, B ⊂ X       (1)

From the definition of the fuzzy measure g, Sugeno developed the so-called gλ fuzzy measure, which satisfies the additional property:

g(A ∪ B) = g(A) + g(B) + λg(A)g(B)      (2)

for all A, B ⊂ X with A ∩ B = ∅ and some λ > –1.

Let h: X → [0, 1] be a fuzzy subset of X and use the notation Ai = {x1, x2, ..., xi}. For a gλ fuzzy measure, the value of g(Ai) can be determined recursively as

\(\begin{aligned}\begin{cases}g\left(A_{1}\right)=g\left(\left\{x_{1}\right\}\right)=g_{1} \\ g\left(A_{i}\right)=g_{i}+g\left(A_{i-1}\right)+\lambda g_{i} g\left(A_{i-1}\right), \quad g_{i}=g\left(\left\{x_{i}\right\}\right), \quad 1<i \leq n\end{cases}\end{aligned}\)       (3)

λ is given by solving the following equation

\(\begin{aligned}\lambda+1=\prod_{i=1}^{n}\left(1+\lambda g_{i}\right)\\\end{aligned}\)       (4)

Where λ ∈ (–1, +∞) and λ ≠ 0.

Suppose h(x1) ≥ h(x2) ≥ ... ≥ h(xn) (if not, X is rearranged so that this relation holds). Then the so-called Sugeno fuzzy integral e with respect to the gλ fuzzy measure over X can be computed by

\(\begin{aligned}e=\max _{i=1}^{n}\left[\min \left(h\left(x_{i}\right), g\left(A_{i}\right)\right)\right]\\\end{aligned}\)       (5)

Thus the calculation of the fuzzy integral with respect to a gλ fuzzy measure would only require the knowledge of the fuzzy densities, where the ith density gi is interpreted as the degree of importance of the source xi towards the final decision. These fuzzy densities can be subjectively assigned by an expert or can be generated from training data.
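To make the computation concrete, the gλ measure and Sugeno integral of eqs. (3)-(5) can be sketched as follows. This is a minimal NumPy/SciPy sketch for illustration; the function names and the root-bracketing strategy for eq. (4) are our own choices, not part of the original paper.

```python
import numpy as np
from scipy.optimize import brentq

def solve_lambda(densities, tol=1e-10):
    """Find the non-zero root lambda > -1 of prod(1 + lambda*g_i) = 1 + lambda (eq. 4)."""
    g = np.asarray(densities, dtype=float)
    f = lambda lam: np.prod(1.0 + lam * g) - 1.0 - lam
    s = g.sum()
    if abs(s - 1.0) < tol:            # densities already sum to 1: lambda = 0 (additive measure)
        return 0.0
    if s > 1.0:                       # sum > 1: the non-zero root lies in (-1, 0)
        return brentq(f, -1.0 + 1e-9, -1e-9)
    hi = 1.0                          # sum < 1: the non-zero root lies in (0, +inf)
    while f(hi) < 0.0:                # expand the upper bracket until the sign changes
        hi *= 2.0
    return brentq(f, 1e-9, hi)

def sugeno_integral(h, densities):
    """Sugeno fuzzy integral of h over X with respect to the g_lambda measure (eqs. 3 and 5)."""
    h = np.asarray(h, dtype=float)
    g = np.asarray(densities, dtype=float)
    lam = solve_lambda(g)
    order = np.argsort(-h)            # arrange sources so that h(x_1) >= h(x_2) >= ...
    h, g = h[order], g[order]
    g_Ai = np.empty_like(g)
    g_Ai[0] = g[0]                    # g(A_1) = g_1
    for i in range(1, len(g)):        # recursion of eq. (3)
        g_Ai[i] = g[i] + g_Ai[i - 1] + lam * g[i] * g_Ai[i - 1]
    return float(np.max(np.minimum(h, g_Ai)))   # eq. (5)
```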

Let Ω = {ω1, ω2, ..., ωc} be the set of classes of interest and S = {SVM1, SVM2, ..., SVMm} be the set of component SVMs. Let hk: S → [0, 1] be the belief degree that a new sample x belongs to class ωk; that is, hk(SVMi) is the probability that SVMi assigns the new sample x to class ωk. Given {hk(SVMi), i = 1, 2, ..., m} and the fuzzy densities {gk(SVMi), i = 1, 2, ..., m}, the fuzzy integral ek for class ωk can be calculated using (3) to (5). Once the fuzzy integral values {ek, k = 1, ..., c} are obtained, the final decision is:

\(\begin{aligned}k^{*}=\arg \max _{k} e_{k}\\\end{aligned}\)       (6)
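The fusion rule of eq. (6) then reduces to one Sugeno integral per class followed by an argmax. The sketch below reuses the solve_lambda/sugeno_integral helpers sketched above; the array layout (rows = component SVMs, columns = classes) is an assumed convention.

```python
import numpy as np

def ensemble_decision(probas, densities):
    """probas[i, k]    : h_k(SVM_i), probabilistic output of component SVM_i for class w_k
    densities[i, k] : g_k(SVM_i), fuzzy density of SVM_i for class w_k
    Returns the winning class index k* = argmax_k e_k (eq. 6)."""
    m, c = probas.shape
    e = [sugeno_integral(probas[:, k], densities[:, k]) for k in range(c)]  # one integral per class
    return int(np.argmax(e))
```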

4.3. Assignment of Fuzzy Density

For the SVM ensemble, the critical issue is how to assign the fuzzy densities effectively, since they strongly influence the value of the fuzzy integral. There are a number of ways to obtain fuzzy densities (Manna et al., 2022), but they are based only on accuracy and do not consider the uncertainty present in the recognition process. Here, the density values are generated from both accuracy and uncertainty. The procedure for calculating the densities is described below, based on the classifier outputs arranged as in Table 2.

Table 2: Outputs of the Classifier xj


Suppose there are l training samples y1, y2, ..., yl and let Yl = {y1, y2, ..., yl}. For each classifier xj, we obtain the set of outputs shown in Table 2.

Class C1 can be regarded as a fuzzy set defined on Yl, namely

\(\begin{aligned}C_{1}^{j}=\left[\begin{array}{cccc}y_{1} & y_{2} & \ldots & y_{l} \\ h_{11}^{j} & h_{12}^{j} & \ldots & h_{1 l}^{j}\end{array}\right]\\\end{aligned}\)       (7)

where h1tj denotes the membership degree of yt in C1 from the point of view of xj. The vagueness of this fuzzy set can be measured by (Yuan & Shaw, 1995):

\(\begin{aligned}E_{v}\left(C_{1}^{j}\right)=-\frac{1}{l} \sum_{t=1}^{l}\left[h_{1 t}^{j} \ln h_{1 t}^{j}+\left(1-h_{1 t}^{j}\right) \ln \left(1-h_{1 t}^{j}\right)\right]\\\end{aligned}\)       (8)

Ev(C1j) represents the vagueness of C1 under the classifier xj. When h1tj = 0.5 for all t, t = 1, 2, ..., l, Ev(C1j) reaches its maximum, representing the greatest vagueness. When h1tj = 0 or 1 for all t, t = 1, 2, ..., l, Ev(C1j) = 0, representing no vagueness (Zhao & Li, 2022; Jiang et al., 2018; Li, 2018). Because the output is obtained from xj, it can be interpreted as the cognitive ability of xj with respect to the class C1. Similarly, we can calculate the cognitive ability of xj towards the other classes, and finally we obtain the following matrix:

\(\begin{aligned}\left[\begin{array}{cccc}E_{v}^{1}\left(C_{1}\right) & E_{v}^{1}\left(C_{2}\right) & \ldots & E_{v}^{1}\left(C_{n}\right) \\ E_{v}^{2}\left(C_{1}\right) & E_{v}^{2}\left(C_{2}\right) & \ldots & E_{v}^{2}\left(C_{n}\right) \\ \ldots & \ldots & \ldots & \ldots \\ E_{v}^{m}\left(C_{1}\right) & E_{v}^{m}\left(C_{2}\right) & \ldots & E_{v}^{m}\left(C_{n}\right)\end{array}\right]\\\end{aligned}\)       (9)

where Evj(Ci), i = 1, 2, ..., n; j = 1, 2, ..., m, represents the vagueness of classifier xj with respect to class Ci.

On the other hand, an n × n performance matrix can be formed from the outputs of each classifier:

\(\begin{aligned}\left[\begin{array}{cccc}u_{11}^{j} & u_{12}^{j} & \ldots & u_{1 n}^{j} \\ u_{21}^{j} & u_{22}^{j} & \ldots & u_{2 n}^{j} \\ \ldots & \ldots & \ldots & \ldots \\ u_{n 1}^{j} & u_{n 2}^{j} & \ldots & u_{n n}^{j}\end{array}\right]\\\end{aligned}\)       (10)

where uipj (i, p = 1, 2, ..., n) is the number of training samples belonging to class Ci that are classified as class Cp by the classifier xj; the data are classified correctly when i = p. Thus:

\(\begin{aligned}A_{i}^{j}=\frac{u_{i i}^{j}}{\sum_{p=1}^{n} u_{i p}^{j}}\\\end{aligned}\)       (11)

is interpreted as the degree to which the classifier xj identifies class Ci correctly. Then fuzzy densities can be defined as:

\(\begin{aligned}g_{i}^{j}=A_{i}^{j} \times e^{-E_{v}\left(C_{i}^{j}\right)}\\\end{aligned}\)       (12)

When Ev(Cij) = 0, there is no uncertainty, which indicates that the classifier xj has a clear conception of the class Ci; in other words, xj has the highest discriminability for class Ci, so the fuzzy density equals the classification rate. When Ev(Cij) reaches its maximum, the uncertainty is largest, which indicates that xj has little ability to distinguish class Ci, so the fuzzy density is minimal.

After determining the fuzzy densities, the gλ-fuzzy measure and the fuzzy integral can be computed. The final output class from the combination classifier is the one with the highest integrated value.
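Section 4.3 can be summarized in a short routine that turns one classifier's training outputs into fuzzy densities. The sketch below assumes integer class labels 0, ..., n-1 and a membership matrix of shape (l, n); it follows eqs. (8), (11), and (12) but is an illustrative reconstruction rather than the authors' code.

```python
import numpy as np

def fuzzy_densities(memberships, y_true, y_pred, eps=1e-12):
    """Fuzzy densities g_i^j = A_i^j * exp(-E_v(C_i^j)) for one classifier x_j (eq. 12).
    memberships : (l, n) array; row t = sample y_t, column i = membership degree in class C_i
    y_true      : (l,) true integer labels; y_pred: (l,) labels predicted by x_j
    """
    l, n = memberships.shape
    h = np.clip(memberships, eps, 1.0 - eps)                        # avoid log(0)
    # Vagueness of each class fuzzy set (eq. 8)
    E_v = -np.mean(h * np.log(h) + (1.0 - h) * np.log(1.0 - h), axis=0)
    # Per-class recognition rate (eq. 11): correct counts over row sums of the confusion matrix
    conf = np.zeros((n, n))
    for t, p in zip(y_true, y_pred):
        conf[t, p] += 1
    A = np.diag(conf) / np.maximum(conf.sum(axis=1), 1.0)
    return A * np.exp(-E_v)                                         # eq. (12)
```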

5. Experimental Results

In this study, we used experimental data from a commercial bank and selected 1,500 customer samples, of which 1,279 are "good" customers and 221 are "bad" customers with poor credit conditions. The two categories, however, are highly imbalanced, which makes direct SVM ensemble training unsuitable; the two categories must be brought into balance for the ensemble to fit well. If we train on the obtained samples directly, the optimal separating hyperplane will be skewed toward the category with the smaller sample density (Figure 2), resulting in significant classification error. We must therefore process the samples to balance the two groups in order to obtain a well-performing classifier. Different penalty settings lead to different performances.


Figure 2: Unequal Samples Classification

In this paper, we adopt a cost-sensitive approach that assigns different penalty parameters Ci (i = 1, 2) to the two classes of samples. The optimization objective becomes:

\(\begin{aligned}\min \frac{1}{2}(W \cdot W)+\sum_{i=1}^{n} C_{i} \xi_{i}\\\end{aligned}\)       (13)

Where

\(\begin{aligned}C_{i}= \begin{cases} \begin{array}{lc}C_{1} & X_{i} \in \Xi_{1} \\ C_{2} & X_{i} \notin \Xi_{1}\end{array} \end{cases} \end{aligned}\)       (14)

And C1, C2 must satisfy

\(\begin{aligned}\frac{C_{1}}{C_{2}}=\frac{N_{2}}{N_{1}}\\\end{aligned}\)       (15)

So in this paper, \(\begin{aligned}\frac{C_{2}}{C_{1}}=6\\\end{aligned}\) (Ξ1 denotes the set of good samples), and we take C1 = 200 and C2 = 1200.
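With an off-the-shelf solver, the asymmetric penalties of eqs. (13)-(15) can be approximated through per-class scaling of the penalty parameter, for example via scikit-learn's class_weight option, which multiplies C by a class-specific weight. The label encoding (1 = good, 0 = bad) and the base value of C below are illustrative assumptions, not the paper's implementation.

```python
from sklearn.svm import SVC

N1, N2 = 1279, 221                      # good and bad sample counts; eq. (15): C1/C2 = N2/N1
svm = SVC(kernel="rbf",
          C=1200.0,                     # base penalty, applied in full to the minority ("bad") class: C2
          class_weight={1: N2 / N1,     # good class: effective penalty C1 = 1200 * 221/1279 ≈ 207
                        0: 1.0},        # bad class keeps the full penalty C2 = 1200
          probability=True)
# svm.fit(X_train, y_train)             # X_train, y_train assumed to be the balanced training arrays
```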

In this paper, we randomly select half of each category for training ("good": 639; "bad": 110) and use the remaining samples ("good": 640; "bad": 111) to test the model's forecasting accuracy.

For bagging, we randomly re-sample 70 samples with replacement from the original training data set. We independently train three component SVMs with different penalties on the three training data sets generated by bagging, and then combine the three trained SVMs via the fuzzy integral. Each component SVM uses a kernel function, either the polynomial kernel or the radial basis function kernel:

\(\begin{aligned}K\left(x, x_{i}\right)=\left[\left(x \cdot x_{i}\right)+c\right]^{q}\\\end{aligned}\)

\(\begin{aligned}K\left(x, x_{i}\right)=\exp \left(-\left\|x-x_{i}\right\|^{2} / \sigma^{2}\right)\\\end{aligned}\)       (16)

The corresponding parameters are selected by five-fold cross-validation. To reduce the effect of random variation, ten experiments are performed, and the average performance on the training and test datasets is reported in Table 3 and Table 4, respectively ("0" means "good" and "1" means "bad"). Given the complexity of practical problems, this accuracy is acceptable for credit evaluation.
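For parameter selection, a five-fold grid search over the two candidate kernels of eq. (16) could look as follows; the grids of degrees and gamma values are placeholders, not the values actually searched in the paper (note that scikit-learn's gamma corresponds to 1/σ²).

```python
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = [
    {"kernel": ["poly"], "degree": [2, 3, 4], "coef0": [0.0, 1.0]},  # K(x, x_i) = [(x . x_i) + c]^q
    {"kernel": ["rbf"], "gamma": [0.01, 0.1, 1.0]},                  # K(x, x_i) = exp(-|x - x_i|^2 / sigma^2)
]
search = GridSearchCV(SVC(probability=True), param_grid, cv=5, scoring="accuracy")
# search.fit(X_train_k, y_train_k)     # run once per bagged training set TR_k (arrays assumed)
```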

Table 3: Training Set Correct Rate


Table 4: Test Set Correct Rate


The comparison with a single SVM, an SVM ensemble via majority voting, a single neural network, and a fuzzy neural network ensemble is shown in Table 5 and Figure 3. The results show that the proposed method is stable, highly accurate, robust, and feasible.

Table 5: Classification Accuracy Compared with Other Methods Under the Training Datasets



Figure 3: The Results of Classification Accuracy Comparison

This research supports the conclusion of recent studies that the classification accuracy of SVMs is higher than that of other methods when trained on small samples, and that combining multiple classifiers (classifier ensembles) performs better than single classifiers. Compared with a single SVM, the SVM ensemble achieves superior classification results. However, the single neural network classifier outperforms the (diversified) multiple neural network classifiers; the main cause is that the divided training datasets may not be sufficient to train multiple neural network classifiers. Because SVM classifiers perform better on small training datasets, the accuracy of the SVM ensemble is also greater than that of the neural network ensemble.

6. Conclusion

Due to the growth of electronic commerce, more and more transactions take place online; hence, trade security and credit evaluation are becoming increasingly important, and the issue of how to assess credit grows more significant. In this paper, we built a customer credit index system with 30 detailed indexes and proposed a fuzzy integral-based support vector machine ensemble technique for classification. The main benefit of this method is that, in addition to combining the classification results, it also takes into account the relative importance of the different component SVM classifiers. The experiments demonstrate the efficacy and efficiency of our method; the proposed method is accurate, reliable, and robust.

References

  1. Abellan, J., & Castellano, J. G. (2017). A comparative study on base classifiers in ensemble methods for credit scoring. Expert Systems with Applications, 73, 1-10. https://doi.org/10.1016/j.eswa.2016.12.020
  2. Abid, L. (2022). A logistic regression model for credit risk of companies in the service sector. International Research in Economics and Finance, 6(2), 1. https://doi.org/10.20849/iref.v6i2.1179
  3. Bauer, E., & Kohavi, R. (1999). An empirical comparison of voting classification algorithms: Bagging, boosting, and variants. Machine Learning, 36(1), 105-139. https://doi.org/10.1023/A:1007515423169
  4. Isnurhadi, I., Adam, M., Sulastri, S., Andriana, I., & Muizzuddin, M. (2021). Bank capital, efficiency, and risk: Evidence from Islamic banks. Journal of Asian Finance, Economics, and Business, 8(1), 841-850. https://doi.org/10.13106/jafeb.2021.vol8.no1.841
  5. Jiang, H., Ching, W. K., Yiu, K. F. C., & Qiu, Y. (2018). Stationary Mahalanobis kernel SVM for credit risk evaluation. Applied Soft Computing, 71, 407-417. https://doi.org/10.1016/j.asoc.2018.07.005
  6. Khashman, A. (2010). Neural networks for credit risk evaluation: Investigation of different neural models and learning schemes. Expert Systems with Applications, 37(9), 6233-6239. https://doi.org/10.1016/j.eswa.2010.02.101
  7. Kwak, K. C., & Pedrycz, W. (2005). Face recognition: A study in information fusion using a fuzzy integral. Pattern Recognition Letters, 26(6), 719-733. https://doi.org/10.1016/j.patrec.2004.09.024
  8. Le, T. T. D., & Diep, T. T. (2020). The effect of lending structure concentration on credit risk: The evidence of Vietnamese commercial banks. Journal of Asian Finance, Economics, and Business, 7(7), 59-72. https://doi.org/10.13106/jafeb.2020.vol7.no7.059
  9. Leonard, A. C., & Villiers, C. (2000). The nature of the end-user relationship in the development of electronic commerce applications. SIGCPR '00: Proceedings of the 2000 ACM SIGCPR Conference on Computer Personnel Research, Illinois, USA, April 2000 (pp. 86-92). https://doi.org/10.1145/333334.333360
  10. Rizwan-ul-Hassan, Li, C., & Liu, Y. (2021). Online dynamic security assessment of wind integrated power system using SDAE with SVM ensemble boosting learner. International Journal of Electrical Power and Energy Systems, 125, 106429. https://doi.org/10.1016/j.ijepes.2020.106429
  11. Li, Z. (2018). GBDT-SVM credit risk assessment model and empirical analysis of peer-to-peer borrowers under consideration of audit information. Open Journal of Business and Management, 6(2), 362-372. https://doi.org/10.4236/ojbm.2018.62026
  12. Liu, W., Fan, H., & Xia, M. (2022). Credit scoring based on tree-enhanced gradient boosting decision trees. Expert Systems with Applications, 189, 116034. https://doi.org/10.1016/j.eswa.2021.116034
  13. Manna, A. K., Cardenas-Barron, L. E., Dey, J. K., Mondal, S. K., Shaikh, A. A., Cespedes-Mota, A., & Trevino-Garza, G. (2022). A fuzzy imperfect production inventory model based on fuzzy differential and fuzzy integral method. Journal of Risk and Financial Management, 15(6), 239. https://doi.org/10.3390/jrfm15060239
  14. Moula, F. E., Guotai, C., & Abedin, M. Z. (2017). Credit default prediction modeling: An application of support vector machine. Risk Management, 19(2), 158-187. https://doi.org/10.1057/s41283-017-0016-x
  15. Naili, M., & Lahrichi, Y. (2022). The determinants of banks' credit risk: Review of the literature and future research agenda. International Journal of Finance and Economics, 27(1), 334-360. https://doi.org/10.1002/ijfe.2156
  16. Pham, H. N. (2021). How does internal control affect bank credit risk in Vietnam? A Bayesian analysis. Journal of Asian Finance, Economics, and Business, 8(1), 873-880. https://doi.org/10.13106/jafeb.2021.vol8.no1.873
  17. Pham, T. B. D. (2022). The impact of foreign ownership on the credit risk of commercial banks in Vietnam: Before the context of participation in the CPTPP. Journal of Asian Finance, Economics, and Business, 9(5), 305-311. https://doi.org/10.13106/jafeb.2021.vol8.no3.0771
  18. Platt, J. (1999). Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. Advances in Large Margin Classifiers, 10(3), 61-74.
  19. Shen, F., Ma, X., Li, Z., Xu, Z., & Cai, D. (2018). An extended intuitionistic fuzzy topsis method based on a new distance measure with an application to credit risk evaluation. Information Sciences, 428, 105-119. https://doi.org/10.1016/j.ins.2017.10.045
  20. Srinivasan, V., & Kim, Y. H. (1987). Credit granting: A comparative analysis of classification procedures. Journal of Finance, 42(3), 665-681. https://doi.org/10.1111/j.1540-6261.1987.tb04576.x
  21. Tsai, C. F., & Wu, J. W. (2008). Using neural network ensembles for bankruptcy prediction and credit scoring. Expert Systems with Applications, 34(4), 2639-2649. https://doi.org/10.1016/j.eswa.2007.05.019
  22. Yao, G., Hu, X., & Wang, G. (2022). A novel ensemble feature selection method by integrating multiple ranking information combined with an SVM ensemble model for enterprise credit risk prediction in the supply chain. Expert Systems with Applications, 200, 117002. https://doi.org/10.1016/j.eswa.2022.117002
  23. Yuan, Y., & Shaw, M. J. (1995). Induction of fuzzy decision trees. Fuzzy Sets and Systems, 69(2), 125-139. https://doi.org/10.1016/0165-0114(94)00229-Z
  24. Zhao, J., & Li, B. (2022). Credit risk assessment of small and medium-sized enterprises in supply chain finance based on SVM and BP neural network. Neural Computing and Applications, 56, 1-11