DOI: http://dx.doi.org/10.7236/IJIBC.2022.1.142

A Study on Explainable Artificial Intelligence-based Sentimental Analysis System Model  

Song, Mi-Hwa (School of Smart IT, Semyung University)
Publication Information
International Journal of Internet, Broadcasting and Communication, vol. 14, no. 1, 2022, pp. 142-151
Abstract
In this paper, a sentiment analysis model combined with an explainable artificial intelligence (XAI) technique is presented to secure the reliability of machine learning-based sentiment prediction. The applicability of the proposed model is demonstrated on the IMDB movie review dataset. This approach has the advantage that it can explain, from various perspectives, how the input data affects the model's predictions. In applications of sentiment analysis such as recommendation systems, emotion analysis through facial expression recognition, and opinion mining, presenting concrete, evidence-based analysis results in this way makes it possible to gain the trust of the system's users.
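The full text is not reproduced here, but the approach the abstract describes, a LIME explanation layered over a machine-learning sentiment classifier trained on IMDB reviews, can be sketched roughly as follows. This is a minimal illustration, not the paper's actual code: it assumes scikit-learn and the lime package, and the CSV filename and the 'review'/'sentiment' column names are assumptions based on the common Kaggle release of the IMDB dataset.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Load the IMDB movie review dataset (column names 'review' and
# 'sentiment' are assumed, as in the Kaggle version of the data).
df = pd.read_csv("IMDB Dataset.csv")
X = df["review"]
y = (df["sentiment"] == "positive").astype(int)

# Train a plain machine-learning sentiment classifier:
# TF-IDF features feeding a logistic regression.
model = make_pipeline(
    TfidfVectorizer(max_features=20000),
    LogisticRegression(max_iter=1000),
)
model.fit(X, y)

# Ask LIME to explain a single prediction: it perturbs the review text
# and reports which words pushed the classifier toward each class.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    X.iloc[0],             # the review to explain
    model.predict_proba,   # black-box probability function
    num_features=10,       # top contributing words to report
)
for word, weight in explanation.as_list():
    print(f"{word:>15s}  {weight:+.3f}")
```

In the printed output, positive weights mark words that push LIME's local surrogate model toward the "positive" class and negative weights toward "negative". This word-level evidence is the kind of specific, evidence-based explanation the abstract argues can earn user trust.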
Keywords
Explainable Artificial Intelligence; LIME; Sentimental Analysis; Model Interpretability;