
Financial Distress Prediction Using Adaboost and Bagging in Pakistan Stock Exchange

  • TUNIO, Fayaz Hussain (Center for China Fiscal Development, Central University of Finance and Economics) ;
  • DING, Yi (Center for China Fiscal Development, Central University of Finance and Economics) ;
  • AGHA, Amad Nabi (Department of Business & Health Management, Dow University of Health and Sciences) ;
  • AGHA, Kinza (Government Girls Lower Secondary School, Government of Sindh) ;
  • PANHWAR, Hafeez Ur Rehman Zubair (Indus Center for Sustainable Development)
  • Received : 2020.10.01
  • Accepted : 2020.12.14
  • Published : 2021.01.30

Abstract

Default has become an extreme concern in the current world due to the financial crisis. Earlier predictions of companies' bankruptcy provide evidence of decision assistance for financial and regulatory bodies. Notwithstanding numerous advanced approaches, this area of study is not outdated and requires additional research. The purpose of this research is to find the best classifier to detect a company's default risk and bankruptcy. This study used secondary, time-series data from the Pakistan Stock Exchange (PSX) to examine the impact of the determinants. It examined several classifiers according to their ability to correctly categorize default and non-default Pakistani companies listed on the PSX. The PSX has grown consistently in recent years and has provided benefits to its stockholders. This paper utilizes machine learning techniques to predict financial distress in companies listed on the PSX. Our results indicate that most multi-stage combinations of classifiers provided noteworthy improvements over the individual classifiers. This means that firms will have to work on financial variables such as liquidity and profitability so as not to fall into the category of liquidation. Moreover, Adaptive Boosting (Adaboost) provides a significant boost in the performance of each classifier.


1. Introduction

Due to the impact of default on various sections of society and the significant losses that firms experienced during the global financial crisis, the need to gauge credit risk has become inevitable. Copious studies have been carried out on bloated and unstable economies since the mid-1990s. One of the least studied emerging markets is the Pakistan Stock Exchange (PSX); a study of the PSX therefore contributes to the literature on finance in emerging and developing markets, particularly South Asia. The enormous impact of organizations defaulting on long-term loans has established credit risk prediction as a vital part of an organization's credit assessment methods.

Predicting and effectively managing credit risk is a critical component of comprehensive risk management and is essential for the long-term success of any organization. The literature review confirms that much work has been done on this subject in previous periods. However, the alarming credit crisis has shown the importance of the criteria that need to be evaluated and studied. Furthermore, monetary reforms (for instance, the Basel III standards) and the need for an efficient risk management system also indicate the necessity of research on models for predicting default risk and bankruptcy. It is essential for such institutions to have a transparent and efficient management structure. As a meaningful part of a logical decision-making process, corporate default forecasting helps prevent and reduce default risk. For researchers, this has highlighted the need to construct efficient structures or models with considerable precision. A mixture of classifiers is an excellent way of utilizing information and leads to greater precision than individual classifiers; the greater the model accuracy, the more suitable the method is for predicting default. Research in this regard has just started and therefore needs to be carried out comprehensively. Formerly, researchers used Decision Trees (DT) or Neural Networks (NN) as the primary base learners, comparable to single NN classifiers. This paper also analyzes the Adaboost and Bagging approaches to predicting default with different models such as DT, Logistic Regression (LR), Artificial Neural Networks (ANN), and Support Vector Machine (SVM).

2. Literature Review

Previously, several types of research have been carried out on methods of default prediction. A single-variable approach was implemented by Beaver (1966), and Discriminant Analysis (DA) was the preferred method of Altman (1968). Since then, much effort has been made to extend Altman's results using various techniques. The application of data mining methods such as DT, ANN, and SVM to insolvency prediction began in the early 1980s (Pompe & Feelders, 1997; Shin et al., 2005).

In recent decades, various machine learning models have been used to predict default and financial distress. The classification of firms based on firm- and country-level variables using the DT technique was initially done by Frydman et al. (1985). The DT procedure carries the risk of non-systematic errors in the established standards of characteristics (Quinlan, 1986). Quinlan (1986) summarized an approach to synthesizing decision trees that has been used in a variety of systems and described one such system, ID3. ID3 was designed for situations where there are many attributes and the training set contains many objects, but a reasonably good decision tree is needed without much computation. This approach has been utilized in various studies to predict credit risk (Pompe & Feelders, 1997; Messier & Hansen, 1988).

Lin and McClean (2001) utilized four different approaches (DA, LR, NN, and C5.0), each based on two feature selection methods, to predict business failure. Two were statistical approaches, and the other two were machine learning approaches. Eventually, a hybrid method that combines the finest features of different classification models was developed to increase prediction performance; empirical tests showed that the hybrid method generates higher prediction accuracy than individual classifiers (Ramakrishnan et al., 2016). Techniques such as genetic algorithms and ANN were implemented in different but relevant studies to examine bankruptcy (Shin & Lee, 2002; Hertz, 2018). Lately, the ANN approach has been adopted by some of the significant commercial credit-default predicting institutions (Nabi et al., 2020).

For instance, Dang (2020) noted that banks and several financial institutions have created such structures for predicting default, particularly Moody's governance risk, which is a key determinant of credit quality for all debt issuers. The prediction of company bankruptcies is a significant and widely studied subject since it can have a significant impact on banks' lending decisions and profitability. Atiya (2001) reviewed the subject of bankruptcy prediction, with stress on NN models, and developed an NN bankruptcy prediction model. Influenced by one of the traditional credit risk models, the model developed by Merton (1974), he proposed novel indicators for the NN system. Al-Homaidi et al. (2020) found that the combination of SVM and LR exhibited considerable stability in the prediction results, which is vital and significant in the banks' credit rating process.

Van Gestel et al. (2005) stated that the Basel II capital accord urges banks to develop internal rating models that are financially intuitive, easily understandable, and optimally predictive of default. Standard linear logistic models are easily readable but have limited flexibility. ANN and SVM models are less straightforward to interpret but can capture more complex multivariate non-linear relations. A gradual approach that balances the interpretability and predictability requirements was applied to rate banks.

Ha (2019) analyzed and synthesized the theoretical basis relating to operational self-sustainability and credit growth of people's credit funds (PCFs). Based on the synthesized and analyzed theories, the paper defined the factors affecting operational self-sustainability and credit growth and built a model of the interactive relationship between credit growth and operational self-sustainability of people's credit funds in Vietnam's Mekong Delta Region. Myers and Forgy (1963) implemented a multivariate scheme to envisage a two-stage DA structure. Several discriminant and multiple regression analyses were performed on retail credit application data to develop a numerical scoring system for predicting credit risk in a finance company. Results showed that equal weights for all significantly predictive items were as effective as weights from the more sophisticated techniques of DA and step-wise multiple regression. However, a variation of the basic DA produced a better separation of groups at the lower score levels, where more potential losses could be eliminated at a minimum cost in potentially good accounts.

West et al. (2005) investigated three ensemble strategies: cross-validation, Bagging, and boosting, employing the multilayer perceptron NN as the base classifier. The generalization capability of the NN ensemble was found to be superior to the single best model for three real-world financial decision applications. It is evident from the work of Abellán and Masegosa (2012) that classification can be carried out with ease through bagging of credal decision trees (CDTs). They studied an application of Bagging CDTs, using imprecise probabilities and uncertainty measures, on data sets with class noise. They also extended an original method for building CDTs to one that works with continuous features and missing data. Through an experimental study, they showed that Bagging CDTs outperforms more complex Bagging approaches on data sets with class noise. Finally, using a bias-variance error decomposition analysis, they justified the performance of Bagging CDTs, showing that it achieves a stronger reduction of the variance error component.

3. Methods

3.1. Adaptive Boosting (Adaboost)

The first step of the classifier procedure is to train the individual classifiers and then combine them. In a group of N independent classifiers with uncorrelated error regions, the error obtained by averaging their outputs can be decreased by a factor of N. Freund and Schapire (1995) suggested the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. Their study was an extension of the well-studied on-line prediction model to a general decision-theoretic setting. They showed that the multiplicative weight-update rule can be adapted to this model, yielding bounds that are slightly weaker in some cases but applicable to a considerably more general class of learning problems.

Each new classifier is constructed on a training set in which the samples misclassified by the preceding model are given more weight, while samples that were adequately classified receive lower weight. Once a possible classifier has been identified, it is necessary to measure its accuracy. The AdaBoost algorithm of Freund and Schapire (1997) was the first practical boosting algorithm, and it remains one of the most widely used and studied, with applications in numerous fields. Adaboost comprises multiple weak learners, trained sequentially, with the training samples weighted based on the errors of the previous weak learners. The individual predictions are combined with a weighted sum to obtain the final estimate. A training set is given by:

\(T_n = \{(X_1, Y_1), (X_2, Y_2), \dots , (X_n,Y_n)\}\) 

where each item \(X_i\) has an associated class \(Y_i \in \{-1, 1\}\). The weight \(\omega_b(i)\) is allocated to every observation \(X_i\) and is initially set to \(1/n\); this weight is updated before every iteration. A simple classifier, denoted \(C_b(X_i)\), is constructed on the new training set \(T_b\) and applied to every training sample. The error of this classifier is denoted \(\xi_b\) and is calculated as:

\(\xi_{b}=\sum_{i=1}^{n} \omega_{b}(i)\, \xi_{b}(i)\)

where

\(\xi_{b}(i)=\begin{cases} 0, & \text{if } C_{b}(X_{i}) = y_{i} \\ 1, & \text{if } C_{b}(X_{i}) \neq y_{i} \end{cases}\)

The new weight for the (b+1)-th iteration is:

\(\omega_{b+1}(i)=\omega_{b}(i) \cdot \exp \left(\alpha_{b} \xi_{b}(i)\right)\) 

Here \(\alpha_b\) is a constant of the classifier in the b-th iteration. This procedure is repeated in each phase for b = 1, 2, ..., B. Lastly, the ensemble classifier is constructed as a linear combination of the individual classifiers weighted by the corresponding constants \(\alpha_b\). Figure 1 shows the step-by-step framework of the Adaboost algorithm, the combination system, and the weak learning algorithm for default prediction.
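As a concrete illustration of the scheme just described, the following is a minimal sketch (not the authors' exact setup) using scikit-learn's AdaBoostClassifier, whose default weak learner is a depth-1 decision tree; the synthetic data and parameter values are assumptions for illustration only.

```python
# Minimal AdaBoost sketch: sequentially trained weak learners with
# re-weighted samples, combined by a weighted vote (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the PSX ratio data (21 features, binary label).
X, y = make_classification(n_samples=217, n_features=21, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# The default base learner is a depth-1 decision tree (a "stump");
# n_estimators plays the role of B, the number of boosting rounds.
ada = AdaBoostClassifier(n_estimators=100, random_state=0)
ada.fit(X_train, y_train)
print("AdaBoost test accuracy:", ada.score(X_test, y_test))
```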

3.2. Bagging

Bagging is another meta-algorithm that pools decisions from multiple classifiers. In bagging, we train k models on different bootstrap samples of the data and then predict on the test set by averaging (or voting over) the outcomes of the k models. The bagging algorithm can be described as follows (a brief code sketch is given after the list):

•  Training

For every iteration t, t = 1, ..., T:

• Randomly draw N samples, with replacement, from the training set.

• Train a "base model" (e.g., neural network, decision tree) on the sampled data.

•  Prediction

For every test instance:

• Forecast by merging the outcomes of all T trained models:

•  Regression: averaging

•  Classification: majority vote
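The list above maps directly onto a minimal sketch of this kind; the data and settings below are illustrative assumptions, not the paper's configuration.

```python
# Minimal bagging sketch: T bootstrap samples, one base model per sample,
# majority vote at prediction time (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=217, n_features=21, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# The default base model is a decision tree; any classifier (e.g., an SVM
# or a neural network) could be substituted. n_estimators is T, and
# bootstrap=True draws N samples with replacement for each model.
bag = BaggingClassifier(n_estimators=50, bootstrap=True, random_state=0)
bag.fit(X_train, y_train)
print("Bagging test accuracy:", bag.score(X_test, y_test))
```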

3.3. Logistic Regression (LR)

Logistic regression is a statistical model that in its basic form uses a logistic function to model a binary dependent variable; in regression analysis, logistic regression estimates the parameters of a logistic model. The dependent variable expresses the financial state: it takes the value 1 if the specific firm in the particular period is in financial distress and the value 0 if it is characterized by financial stability (Allison, 1999; Hosmer Jr et al., 2013). This approach was first implemented by Martin (1977) and Morris (2018) to predict corporate default, particularly in the US banking sector. Shisia et al. (2014) conducted a study to predict financial distress on the Nairobi Securities Exchange using LR. They used the Multivariate DA (MDA) statistical technique, as used by Altman (2006), to predict corporate financial distress and to determine company growth and the zone in which the company falls according to the Altman model, which defines a safe zone, a grey zone, and a distress zone. The study sourced data from secondary sources. Mihalovic (2016) compared the overall prediction performance of two models, the first estimated through MDA and the second based on LR; the results suggest that the LR model outperforms the classification accuracy of the MDA model. LR stands in the second position, after MDA, in predicting default as per Dimitras et al. (1996).
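A minimal sketch of LR applied to a binary distress label of the kind described above; the firm-level figures and column names are hypothetical, not the study's data.

```python
# Minimal logistic-regression sketch: 1 = financial distress, 0 = stable.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical firm-period observations with two illustrative ratios.
data = pd.DataFrame({
    "ebit_to_ta": [0.12, -0.05, 0.08, -0.15, 0.03, -0.09],
    "wc_to_ta":   [0.30,  0.02, 0.18, -0.10, 0.22, -0.04],
    "distress":   [0, 1, 0, 1, 0, 1],
})
features = data[["ebit_to_ta", "wc_to_ta"]]
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(features, data["distress"])

# Estimated probability of stability (class 0) and distress (class 1).
new_firm = pd.DataFrame({"ebit_to_ta": [0.05], "wc_to_ta": [0.10]})
print(model.predict_proba(new_firm))
```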

Figure 1: The framework of the Adaboost algorithm

3.4. Decision Tree (DT)

Among the most renowned and practical approaches for prediction is the DT, mainly because of the clearness and lucidity of this approach. It is a non-parametric method used for classification and regression. The objective is to construct a model that predicts the value of a target variable by learning simple decision rules implied by the data characteristics. DT was initially implemented to predict default risk by Frydman et al. (1985). Several scholars subsequently implemented this approach to predict the chances of default or bankruptcy (Gepp et al., 2010; Carter & Catlett, 1987; Messier & Hansen, 1988; Pompe & Feelders, 1997).
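To illustrate the clarity the approach is valued for, a tiny sketch follows; it fits a shallow tree on synthetic data and prints the learned decision rules (the feature names are placeholders).

```python
# Minimal decision-tree sketch: fit a shallow tree and print its rules.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=217, n_features=5, random_state=0)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"ratio_{i}" for i in range(5)]))
```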

3.5. Neural Networks (NN)

To overcome regression difficulties, the NN approach was adopted; applying a NN to the problem can provide much more predictive power than a traditional regression. Corporate default was first predicted using the NN approach in the early 1990s; since then, many researchers have utilized this structure to predict failure. Moreover, it is evident from various studies that several financial institutions have already implemented NNs for default risk prediction (Atiya, 2001). The approach is flexible with respect to data characteristics and can handle multiple challenging tasks and parameters; therefore, NNs possess the competence to tackle erroneous and insufficient information (Smith & Stulz, 1985).
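A minimal sketch of a small multilayer perceptron on synthetic data, of the kind the ensembles in this paper can build on; the architecture and data are illustrative assumptions.

```python
# Minimal neural-network sketch: a small multilayer perceptron classifier.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=217, n_features=21, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)
print("NN test accuracy:", mlp.score(X_test, y_test))
```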

Figure 2: The SVM Displays a Hyperplane that Parts the Two Classes Accurately

3.6. Support Vector Machines (SVM)

SVMs are assessed as among the most acceptable classification methods, and a number of empirical findings exhibit their properties and significance. SVMs perform classification by finding the hyperplane that maximizes the margin between the two classes; the vectors that define the hyperplane are the support vectors. An SVM creates a linear model to estimate the decision function, using nonlinear class boundaries based on support vectors. The objective of applying SVMs is to find the best line in two dimensions, or the best hyperplane in more than two dimensions, to separate the space into classes.

The SVM algorithm finds the hyperplane with the largest margin: the largest distance to the nearest sample points. Hence its denotation as the maximum-margin classifier. The closest data points are called support vectors.
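A minimal SVM sketch on synthetic data; the linear kernel makes the maximum-margin hyperplane explicit, and the reported support vectors are the nearest training points (the settings are illustrative, not the study's).

```python
# Minimal SVM sketch: a linear-kernel SVC finds the maximum-margin hyperplane.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=217, n_features=21, random_state=0)
svm = SVC(kernel="linear", C=1.0).fit(X, y)
print("Support vectors per class:", svm.n_support_)
print("Training accuracy:", svm.score(X, y))
```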

4. Empirical Experiment

4.1. Data Description

The dataset was drawn from secondary data on Pakistani firms. A total of 217 cases of Pakistani businesses were observed, most of which were or are listed on the PSX.

The 21 significant variables in this study were selected using a two-phase predictive variable selection procedure. After studying the default prediction literature, 65 variables out of more than 230 financial ratios were designated as predictive variables; the financial ratios were selected based on their significance in the literature. In the second phase, 21 factors were retained based on the availability of the required data. Table 1 exhibits concise summaries of the shortlisted factors for organizations that default and those that do not.

To shortlist the factors, direct regression and DT analysis were adopted, and the significant factors according to the two approaches were recognized. From the 21 factors, those that could accurately pre-judge the firms' default and non-default were chosen for the structure. The selected financial ratios are EBIT to total assets (X1), current assets to total assets (X5), net profit to liabilities (X11), working capital to total assets (X6), and net profit to sales (X16). A sketch of how these ratios are constructed follows the list of abbreviations below.

Where:

EBIT: Earnings Before Interest and Taxes; GP: Gross Profit

TA: Total Assets; L: Liabilities

Ca: Cash; TI: Total Income

CL: Current Liabilities; LL: Long-Term Liabilities

CA: Current Assets; E: Equity

WC: Working Capital; R: Receivables

S: Sales; NP: Net Profit
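As a sketch of how the five selected ratios can be constructed from raw statement items; the column names and figures below are hypothetical placeholders, not the authors' dataset.

```python
# Hypothetical construction of the five selected ratios X1, X5, X11, X6, X16.
import pandas as pd

firms = pd.DataFrame({
    "ebit":            [120.0, -35.0],
    "current_assets":  [300.0,  90.0],
    "total_assets":    [1000.0, 400.0],
    "net_profit":      [80.0,  -50.0],
    "liabilities":     [600.0, 380.0],
    "working_capital": [150.0, -20.0],
    "sales":           [900.0, 250.0],
})
ratios = pd.DataFrame({
    "X1_ebit_to_ta": firms["ebit"] / firms["total_assets"],
    "X5_ca_to_ta":   firms["current_assets"] / firms["total_assets"],
    "X11_np_to_l":   firms["net_profit"] / firms["liabilities"],
    "X6_wc_to_ta":   firms["working_capital"] / firms["total_assets"],
    "X16_np_to_s":   firms["net_profit"] / firms["sales"],
})
print(ratios)
```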

Table 1: Concise Commutations of Shortlisted Factors of Default and Non-Default Organizations 

Table 2: Depicting the Progress of Classifier Methods

4.2. Experimental Results

The results are presented in two parts. The first part exhibits the percentage of misclassified cases for every individual classifier, while the improvement of the ensemble classifiers over the baseline classifiers is exhibited in Table 2. The accuracy of each classifier is also evaluated using the ROC curve. Table 2 displays the precision and the misclassified cases for each classifier. As indicated by the outcomes, SVM is the best; the difference between SVM and the next best model is minor but statistically significant.

Generally, the baseline classifiers' findings are not unexpected and are well-matched with preceding empirical research on individual classifier performance for default risk prediction. It is evident that SVM is an appropriate and generalizable approach for financial distress in Pakistan as a developing economy. Table 2 also shows the performance accuracy of the multi-stage classifiers in comparison with the baseline classifiers.

Figure 3: Performances of Adaboost and Bagging

The multi-stage classifiers considerably outperform the baseline. However, the bagging improvement is not significant, with only LR showing a major enhancement. The results show that all multi-stage classifiers, including Adaboost with LR, NN, DT, and SVM, outperform the baseline classifiers. Figure 3 shows the ROC curves for the baseline and multi-stage classifiers, and Table 2 depicts the progress of the classifier methods.
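The kind of comparison summarized in Table 2 and Figure 3 can be sketched as follows; the data are synthetic and the settings are illustrative, so the numbers will not reproduce the paper's results.

```python
# Sketch of a baseline vs. ensemble comparison by accuracy and ROC AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=217, n_features=21, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "LR":       LogisticRegression(max_iter=1000),
    "DT":       DecisionTreeClassifier(random_state=0),
    "SVM":      SVC(probability=True, random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "Bagging":  BaggingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]
    print(f"{name:9s} accuracy={accuracy_score(y_te, model.predict(X_te)):.3f} "
          f"ROC AUC={roc_auc_score(y_te, scores):.3f}")
```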

5. Conclusion

Due to the importance of default and its impact on organizations, default prediction has received extensive attention from researchers. Accurate prediction of firms moving toward bankruptcy is undeniably required, and various models have been used for forecasting default. The use of ensembles of classifiers has developed and become standard in numerous fields in the last few years. As indicated by different studies, individual classifiers make mistakes on different cases when predicting default (Polikar, 2006; Rokach, 2009), and this diversity is relied upon to improve classification accuracy. Brown et al. (2005) stated that ensemble approaches to classification and regression have attracted a great deal of interest in recent years; these methods can be shown, both theoretically and empirically, to outperform single predictors on a wide range of tasks. Rokach (2009) suggested that the idea of ensemble methodology is to build a predictive model by integrating multiple models. It is well-known that ensemble methods can be used to improve prediction performance; he provided an overview of ensemble methods in classification tasks and presented all important types of ensemble methods, including bagging and boosting.

This study focuses on corporate default prediction; the approach is differentiated by employing multi-stage classifiers for Pakistani firms in a developing emerging economy. The accuracy of five classifiers was assessed to determine whether it is possible to forecast Pakistani firms' default based on financial ratios. The empirical results highlighted the financial ratios EBIT/TA, CA/TA, NP/L, WC/TA, and NP/S as highly predictive indicators. Most mixed classifiers gave critical improvements with respect to individual classifiers; moreover, Adaboost affords enhancement in the performance of the particular classifier, and Bagging with SVM and LR performed well. According to the literature in various scientific and engineering fields, an ensemble of classifiers increases forecasting accuracy, and the results of this study reveal the improvement in forecast precision of ensemble classifiers. Besides, this study used ensemble classifiers for default prediction to manage the cost of a more dependable model.

References

  1. Abellan, J., & Masegosa, A. (2012). Bagging schemes on the presence of class noise in classification. Expert Systems with Applications, 39(8), 6827-6837. https://doi.org/10.1016/j.eswa.2012.01.013
  2. Al-Homaidi, E., Tabash, M., Al-Ahdal, W., Farhan, N., & Khan, S. (2020). The liquidity of Indian firms: empirical evidence of 2154 firms. The Journal of Asian Finance, Economics, and Business, 7(1), 19-27. https://doi.org/10.13106/jafeb.2020.vol7.no1.19
  3. Allison, P. D. (2012). Logistic regression using SAS: Theory and application. Cary, NC: SAS Institute.
  4. Altman, E. I. (1968). Financial ratios, discriminant analysis, and the prediction of corporate bankruptcy. The Journal of Finance, 23(4), 589-609. https://doi.org/10.1111/j.1540-6261.1968.tb00843.x
  5. Atiya, A. (2001). Bankruptcy prediction for credit risk using neural networks: A survey and new results. IEEE Transactions on Neural Networks, 12(4), 929-935. https://doi.org/10.1109/72.935101
  6. Beaver, W. (1966). Financial ratios as predictors of failure. Journal of Accounting Research, 4, 71. https://doi.org/10.2307/2490171
  7. Brown, G., Wyatt, J., Harris, R., & Yao, X. (2005). Diversity creation methods: A survey and categorization. Information Fusion, 6(1), 5-20. https://doi.org/10.1016/j.inffus.2004.04.004
  8. Carter, C., & Catlett, J. (1987). Assessing credit card applications using machine learning. IEEE Expert, 2(3), 71-79. https://doi.org/10.1109/mex.1987.4307093
  9. Dang, H. T. (2020). Determinants of liquidity of listed enterprises: Evidence from Vietnam. The Journal of Asian Finance, Economics, and Business, 7(11), 67-73. https://doi.org/10.13106/jafeb.2020.vol7.no11.067
  10. Dimitras, A., Zanakis, S., & Zopounidis, C. (1996). A survey of business failures with an emphasis on prediction methods and industrial applications. European Journal of Operational Research, 90(3), 487-513. https://doi.org/10.1016/0377-2217(95)00070-4
  11. Freund, Y., & Schapire, R. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119-139. https://doi.org/10.1006/jcss.1997.1504
  12. Frydman, H., Altman, E., & Kao, D. (1985). Introducing recursive partitioning for financial classification: The case of financial distress. The Journal of Finance, 40(1), 269-291. https://doi.org/10.1111/j.1540-6261.1985.tb04949.x
  13. Gepp, A., Kumar, K., & Bhattacharya, S. (2009). Business failure prediction using decision trees. Journal of Forecasting, 29(6), 536-555. https://doi.org/10.1002/for.1153
  14. Ha, D. V. (2019). The interactive relationship between credit growth and operational self-sustainability of people's credit funds in the Mekong Delta Region of Vietnam. The Journal of Asian Finance, Economics, and Business, 6(3), 55-65. https://doi.org/10.13106/jafeb.2019.vol6.no3.55
  15. Hertz, J. A. (2018). Introduction to the theory of neural computation. Boca Raton, Florida, USA: CRC Press.
  16. Hosmer Jr, D. W., Lemeshow, S., & Sturdivant, R. X. (2013). Applied logistic regression. Hoboken, NJ: John Wiley & Sons.
  17. Lin, F., & McClean, S. (2001). A data mining approach to the prediction of corporate failure. Knowledge-Based Systems, 14(3-4), 189-195. https://doi.org/10.1016/s0950-7051(01)00096-x
  18. Lin, Y. (2002). Improvement of behavior scores by the dual-model scoring system. International Journal of Information Technology & Decision Making, 01(01), 153-164. https://doi.org/10.1142/s0219622002000105
  19. Martin, D. (1977). Early warning of bank failure. Journal of Banking & Finance, 1(3), 249-276. https://doi.org/10.1016/0378-4266(77)90022-x
  20. Merton, R. C. (1974). On the pricing of corporate debt: The risks structure of interest rates. The Journal of Finance, 29(2), 449-470. https://doi.org/10.1111/j.1540-6261.1974.tb03058.x
  21. Messier, W., & Hansen, J. (1988). Inducing rules for expert system development: An example using default and bankruptcy data. Management Science, 34(12), 1403-1415. https://doi.org/10.1287/mnsc.34.12.1403
  22. Mihalovic, M. (2016). Performance comparison of Multiple Discriminant Analysis (MDA) and Logit models in bankruptcy prediction. Economics and Sociology, 9(4), 101-118. https://doi.org/10.14254/2071-789X.2016/9-4/6
  23. Morris, R. (2018). Early warning indicators of corporate failure: A critical review of previous research and further empirical evidence. London, UK: Routledge.
  24. Myers, J., & Forgy, E. (1963). The development of numerical credit evaluation systems. Journal of the American Statistical Association, 58(303), 799-806. https://doi.org/10.1080/01621459.1963.10500889
  25. Nabi, A., Shahid, Z., Mubashir, K., Ali, A., Iqbal, A., & Zaman, K. (2020). Relationship between population growth, price level, poverty incidence, and carbon emissions in a panel of 98 countries. Environmental Science and Pollution Research, 27(25), 31778-31792. https://doi.org/10.1007/s11356-020-08465-1
  26. Polikar, R. (2006). Ensemble-based systems in decision making. IEEE Circuits and Systems Magazine, 6(3), 21-45. https://doi.org/10.1109/mcas.2006.1688199
  27. Pompe P., & Feelders, A. (1997). Using machine learning, neural networks, and statistics to predict corporate bankruptcy. Computer‐Aided Civil and Infrastructure Engineering, 12(4), 267-276. https://doi.org/10.1111/0885-9507.00062
  28. Quinlan, J. (1986). Induction of decision trees. Machine Learning, 1(1), 81-106. https://doi.org/10.1007/bf00116251
  29. Ramakrishnan, S., Hishan, S., Nabi, A., Arshad, Z., Kanjanapathy, M., Zaman, K., & Khan, F. (2016). An interactive environmental model for economic growth: evidence from a panel of countries. Environmental Science and Pollution Research, 23(14), 14567-14579. https://doi.org/10.1007/s11356-016-6647-8
  30. Rokach, L. (2009). Ensemble methods in supervised learning: Data mining and knowledge discovery handbook. Berlin, Germany: Springer.
  31. Shin, K., & Lee, Y. (2002). A genetic algorithm application in bankruptcy prediction modeling. Expert Systems with Applications, 23(3), 321-328. https://doi.org/10.1016/s0957-4174(02)00051-9
  32. Shin, K., Lee, T., & Kim, H. (2005). An application of support vector machines in the bankruptcy prediction model. Expert Systems with Applications, 28(1), 127-135. https://doi.org/10.1016/j.eswa.2004.08.009
  33. Shisia, A., Sang, W., Waitindi, S., & Okibo, W. (2014). An in-depth analysis of the Altman's failure prediction model on corporate financial distress in Uchumi supermarket in Kenya. European Journal of Business and Management, 6, 27-41. https://iiste.org/Journals/index.php/EJBM/article/view/14767/15261
  34. Smith, C., & Stulz, R. (1985). The determinants of firms' hedging policies. The Journal of Financial and Quantitative Analysis, 20(4), 391. https://doi.org/10.2307/2330757
  35. Thomas Ng, S., Wong, J., & Zhang, J. (2011). Applying the Z-score model to distinguish insolvent construction companies in China. Habitat International, 35(4), 599-607. https://doi.org/10.1016/j.habitatint.2011.03.008
  36. Van Gestel, T., Baesens, B., Van Dijcke, P., Suykens, J., & Garcia, J. (2005). Linear and non-linear credit scoring by combining logistic regression and support vector machines. The Journal of Credit Risk, 1(4), 31-60. https://doi.org/10.21314/jcr.2005.025
  37. West, D., Dellana, S., & Qian, J. (2005). Neural network ensemble strategies for financial decision applications. Computers & Operations Research, 32(10), 2543-2559. https://doi.org/10.1016/S0305-0548(04)00069-3
