http://dx.doi.org/10.13088/jiis.2022.28.2.307

Domain Knowledge Incorporated Counterfactual Example-Based Explanation for Bankruptcy Prediction Model  

Cho, Soo Hyun (Department of Big Data Analytics, Ewha Womans University)
Shin, Kyung-shik (School of Business, Ewha Womans University)
Publication Information
Journal of Intelligence and Information Systems / v.28, no.2, 2022, pp. 307-332
Abstract
Bankruptcy prediction is one of the most intensively studied problems in business applications: a representative classification task that bears on loan approval, investment decisions, and the profitability of financial institutions. Many studies have demonstrated outstanding performance for bankruptcy prediction models built with artificial intelligence techniques. However, because most machine learning algorithms are "black boxes," explainable AI (XAI), which provides users with explanations for model decisions, has become a prominent research topic. Among the many approaches to explanation, this study focuses on explaining a bankruptcy prediction model with counterfactual examples. A counterfactual explanation presents an alternative case showing how the input would have to change for the model to produce the desired output. This study introduces a counterfactual generation technique based on a genetic algorithm (GA) that incorporates domain knowledge (i.e., causal feasibility) and feature importance obtained from the black-box model, along with other critical properties of counterfactuals, including proximity, plausibility with respect to the data distribution, and sparsity. The proposed method was evaluated both quantitatively and qualitatively to measure its quality and validity.
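As a rough illustration of the kind of GA-based counterfactual search the abstract describes (not the authors' implementation), the following Python sketch evolves perturbations of an input toward the opposite class while penalizing distance to the original case, the number of changed features, and edits to features that domain knowledge marks as immutable. The classifier interface (`predict_proba`), the fitness weights, and the mutability mask are assumptions introduced for the example.

```python
# Minimal sketch of GA-based counterfactual generation for a binary classifier.
# All weights, operators, and the `predict_proba` interface are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def fitness(candidates, x, predict_proba, target_class, mutable, weights=(1.0, 0.5, 0.2)):
    """Score candidates: reach the target class, stay close to x, change few features."""
    w_valid, w_prox, w_sparse = weights
    validity = predict_proba(candidates)[:, target_class]        # want high
    proximity = np.abs(candidates - x).mean(axis=1)              # want low
    sparsity = (np.abs(candidates - x) > 1e-6).mean(axis=1)      # want low
    # Hard-penalize edits to features that domain knowledge marks as immutable.
    infeasible = (np.abs(candidates - x) > 1e-6)[:, ~mutable].any(axis=1)
    return w_valid * validity - w_prox * proximity - w_sparse * sparsity - 1e3 * infeasible

def generate_counterfactual(x, predict_proba, target_class, mutable,
                            pop_size=100, generations=200, mutation_scale=0.1):
    """Evolve perturbations of x until the model assigns the desired class."""
    d = x.shape[0]
    pop = x + rng.normal(0, mutation_scale, size=(pop_size, d)) * mutable
    for _ in range(generations):
        scores = fitness(pop, x, predict_proba, target_class, mutable)
        parents = pop[np.argsort(scores)[-pop_size // 2:]]        # truncation selection
        # Uniform crossover between randomly paired parents, then Gaussian mutation.
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, d)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        children += rng.normal(0, mutation_scale, size=(pop_size, d)) * mutable
        pop = children
    scores = fitness(pop, x, predict_proba, target_class, mutable)
    return pop[np.argmax(scores)]
```

With a fitted probabilistic classifier (e.g., passing its `predict_proba` method) and a boolean mask over the financial ratios that experts consider actionable, the sketch can be run directly. The published method additionally weights features by their importance in the black-box model and screens candidates for plausibility under the data distribution, which this simplified sketch omits.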
Keywords
Bankruptcy Prediction; Counterfactual; Local Explanation; XAI