Explainable Artificial Intelligence Study based on Blockchain Using Point Cloud


  • 홍성혁 (Division of Smart IT Engineering, Baekseok University, Fintech major)
  • Received : 2021.07.06
  • Accepted : 2021.08.20
  • Published : 2021.08.28

Abstract

Although technology for prediction and analysis using artificial intelligence continues to develop, AI still suffers from the black-box problem: its decision-making process cannot be interpreted. Because users cannot interpret how an AI model reaches its decisions, they cannot fully trust its results. This study examines this problem of artificial intelligence and investigates explainable artificial intelligence based on Blockchain as a way to solve it. In the proposed approach, data from the decision-making process of the explainable AI model are stored in the Blockchain stage by stage, together with timestamps. The Blockchain prevents forgery and falsification of the stored data, and, by its nature, it allows users free access to the data stored in its blocks, such as the decision process. A large part of the difficulty in building explainable AI models comes from the complexity of existing models; therefore, a point cloud is used to make 3D data processing and refinement more efficient, shortening the decision-making process and making it easier to build an explainable AI model. Finally, to address the oracle problem, in which data can be forged or corrupted while being stored in the Blockchain, a Blockchain-based explainable AI model that passes the data through an intermediary during the storage process is proposed, thereby resolving the black-box problem of artificial intelligence.
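
As a rough illustration of the storage idea described in the abstract (appending each stage of the model's decision process to a timestamped, tamper-evident chain), the Python sketch below shows one way such records could be linked by hashes. The class names, fields, and stage labels are assumptions made for this example and are not taken from the paper.

```python
import hashlib
import json
import time

# Minimal sketch: each block stores one stage of an AI model's decision-making
# process together with a timestamp and links to the previous block by hash,
# so that altering a stored record becomes detectable.

class DecisionBlock:
    def __init__(self, stage, decision_data, prev_hash):
        self.timestamp = time.time()        # when this decision stage was recorded
        self.stage = stage                  # e.g. "preprocessing", "inference" (illustrative labels)
        self.decision_data = decision_data  # interpretable record of this stage
        self.prev_hash = prev_hash          # hash link to the previous block
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps({
            "timestamp": self.timestamp,
            "stage": self.stage,
            "decision_data": self.decision_data,
            "prev_hash": self.prev_hash,
        }, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

class DecisionChain:
    """Append-only chain of decision-process records that users can read freely."""

    def __init__(self):
        self.blocks = [DecisionBlock("genesis", {}, "0" * 64)]

    def record(self, stage, decision_data):
        block = DecisionBlock(stage, decision_data, self.blocks[-1].hash)
        self.blocks.append(block)
        return block

    def is_untampered(self):
        # Any modification of a stored record breaks the hash links.
        for prev, curr in zip(self.blocks, self.blocks[1:]):
            if curr.prev_hash != prev.hash or curr.hash != curr.compute_hash():
                return False
        return True

if __name__ == "__main__":
    chain = DecisionChain()
    chain.record("point_cloud_preprocessing", {"points_in": 120000, "points_kept": 8000})
    chain.record("inference", {"predicted_class": "pedestrian", "confidence": 0.93})
    print(chain.is_untampered())  # True as long as no stored record has been altered
```

In an actual deployment the records would be written to a real Blockchain through the intermediary described in the abstract rather than kept in an in-memory list; the sketch only illustrates the timestamping and tamper-evidence properties the abstract relies on.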

Acknowledgement

This research was supported by 2021 Baekseok University Research Fund.
