http://dx.doi.org/10.6109/jkiice.2021.25.9.1275

Automated Fact Checking Model Using Efficient Transformer

Yun, Hee Seung (Department of Computer Engineering, Chung-Ang University)
Jung, Jason J. (Department of Computer Engineering, Chung-Ang University)
Abstract
Nowadays, fake news from newspapers and social media is a serious issue for news credibility. Several machine learning methods (such as LSTM, logistic regression, and Transformer) have been applied to fact checking. In this paper, we present a Transformer-based fact-checking model with improved computational efficiency. Locality Sensitive Hashing (LSH) is employed to compute attention values efficiently, reducing computation time. With LSH, the model can group semantically similar words and compute attention values within each group. The proposed model achieves 75% accuracy, with F1 micro and F1 macro scores of 42.9% and 75%, respectively.
Keywords
Automated fact checking; Locality sensitive hashing; Natural language processing; Transformer
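
The LSH attention described in the abstract follows the Reformer idea (Kitaev et al., 2020): hash queries and keys so that semantically similar tokens land in the same bucket, then compute attention only within each bucket rather than over the full sequence. Below is a minimal NumPy sketch of that mechanism; the angular (random-hyperplane) hashing, the single hash round, and the function names are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def lsh_bucket_ids(vectors, n_planes=4, seed=0):
        # Angular LSH: hash each vector by the sign pattern of random
        # hyperplane projections; similar directions share a bucket id.
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((vectors.shape[-1], n_planes))
        bits = (vectors @ planes) > 0                      # (seq_len, n_planes)
        return bits.astype(int) @ (1 << np.arange(n_planes))  # pack bits into an id

    def lsh_attention(q, k, v, n_planes=4):
        # Restrict scaled dot-product attention to members of the same
        # LSH bucket, instead of attending over the full O(n^2) sequence.
        d = q.shape[-1]
        out = np.zeros_like(v)
        buckets = lsh_bucket_ids(k, n_planes)
        for b in np.unique(buckets):
            idx = np.where(buckets == b)[0]                # token positions in bucket b
            scores = q[idx] @ k[idx].T / np.sqrt(d)        # attention scores within bucket
            weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
            weights /= weights.sum(axis=-1, keepdims=True) # softmax over bucket members
            out[idx] = weights @ v[idx]
        return out

    # Toy usage: 16 token embeddings of dimension 32 (self-attention, q = k = v)
    x = np.random.default_rng(1).standard_normal((16, 32))
    print(lsh_attention(x, x, x).shape)                    # -> (16, 32)

In practice, Reformer additionally shares queries and keys, uses multiple hash rounds, and sorts and chunks buckets for batched computation; the sketch omits those details for clarity.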