Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections

A Label Embedding Methodology Based on an Autoencoder with Skip Connections for Improving the Accuracy of Multi-Label Classification

  • Kim, Museong (Graduate School of Business IT, Kookmin University) ;
  • Kim, Namgyu (Graduate School of Business IT, Kookmin University)
  • Received : 2021.06.09
  • Accepted : 2021.07.19
  • Published : 2021.09.30

Abstract

Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted and has produced remarkable results in fields such as classification, summarization, and generation. Among the various text analysis tasks, text classification is the most widely used in both academia and industry. Text classification problems include binary classification, which assigns one label from two classes; multi-class classification, which assigns one label from several classes; and multi-label classification, which assigns multiple labels from several classes. Multi-label classification in particular requires a training method different from that of binary and multi-class classification because each instance carries multiple labels. Moreover, as the number of labels and classes grows, the number of labels to be predicted grows with it, and the increased prediction difficulty makes performance improvement hard to achieve.

To overcome these limitations, label embedding has been actively studied. Label embedding (i) compresses the initially given high-dimensional label space into a low-dimensional latent label space, (ii) trains a model to predict the compressed labels, and (iii) restores the predicted labels to the high-dimensional original label space. Representative label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, because these techniques consider only linear relationships between labels or compress labels through random transformations, they cannot capture non-linear relationships between labels and therefore cannot create a latent label space that sufficiently preserves the information of the original labels.

Recently, attempts to improve performance by applying deep learning to label embedding have increased. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers substantial information loss when compressing a high-dimensional label space with a very large number of classes into a low-dimensional latent label space. One source of this loss is the vanishing gradient problem that arises during backpropagation. Skip connections were devised to solve this problem: by adding a layer's input to its output, they keep gradients from vanishing during backpropagation and enable efficient training even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies applying them to autoencoders or to the label embedding process are still scarce.

Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder, so that the low-dimensional latent label space reflects the information of the high-dimensional label space well. We applied the proposed methodology to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space.
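To make the architecture concrete, the following is a minimal sketch in PyTorch of an autoencoder whose encoder and decoder both use skip connections. The layer sizes, block structure, and names (`SkipBlock`, `SkipAutoEncoder`) are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """A fully connected layer whose input is added back to its output
    (projected when dimensions differ), i.e., a skip connection."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Linear(in_dim, out_dim)
        # Project the input when dimensions differ so the addition is valid.
        self.proj = nn.Identity() if in_dim == out_dim else nn.Linear(in_dim, out_dim, bias=False)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.fc(x) + self.proj(x))

class SkipAutoEncoder(nn.Module):
    """Autoencoder with skip connections in both encoder and decoder.
    Compresses a high-dimensional multi-hot label vector into a
    low-dimensional latent label vector and reconstructs it."""
    def __init__(self, label_dim=1000, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            SkipBlock(label_dim, 256),
            SkipBlock(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            SkipBlock(latent_dim, 256),
            nn.Linear(256, label_dim),  # logits over the original label space
        )

    def forward(self, y):
        z = self.encoder(y)           # latent label vector
        return self.decoder(z), z

# Reconstruction training step: binary cross-entropy over multi-hot labels.
model = SkipAutoEncoder()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
y = (torch.rand(32, 1000) < 0.01).float()  # toy batch of multi-hot label vectors
logits, z = model(y)
loss = criterion(logits, y)
loss.backward()
optimizer.step()
```

Because each `SkipBlock` adds its input back to its output, gradients have a direct path through the addition during backpropagation, which is the mechanism the methodology relies on to reduce information loss in the compression step.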
Using these spaces, we conducted an experiment that predicts the compressed keyword vector in the latent label space from each paper's abstract and then evaluates the multi-label classification after restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators were far superior for multi-label classification based on the proposed methodology than for traditional multi-label classification methods. This indicates that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which in turn improved the performance of the multi-label classification itself. In addition, we confirmed the utility of the proposed methodology by comparing its performance across domain characteristics and across different dimensionalities of the latent label space.
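The evaluation step of this predict-then-restore pipeline could look like the following sketch, assuming the `SkipAutoEncoder` above and a hypothetical trained `regressor` that maps document features to latent label vectors; the paper's exact metric definitions and averaging scheme may differ.

```python
import torch
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate_label_embedding(model, regressor, X_test, Y_test, threshold=0.5):
    """Predict latent label vectors from document features, decode them back
    to the original label space, binarize, and score the multi-label result.
    `model` is a trained SkipAutoEncoder; `regressor` is a hypothetical
    trained module mapping feature tensors to latent label vectors."""
    model.eval()
    with torch.no_grad():
        z_pred = regressor(X_test)       # features -> latent label space
        logits = model.decoder(z_pred)   # latent -> original label space
        Y_pred = (torch.sigmoid(logits) >= threshold).int().numpy()
    Y_true = Y_test.int().numpy()
    return {
        # Subset accuracy (exact match); the paper's accuracy may be defined differently.
        "accuracy":  accuracy_score(Y_true, Y_pred),
        "precision": precision_score(Y_true, Y_pred, average="micro", zero_division=0),
        "recall":    recall_score(Y_true, Y_pred, average="micro", zero_division=0),
        "f1":        f1_score(Y_true, Y_pred, average="micro", zero_division=0),
    }
```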

Recently, with the development of deep learning technology, research that applies deep learning to the analysis of text contained in various documents such as news articles and blogs has been actively conducted. Among the many text analysis applications, text classification is the most widely used technique in academia and industry. Text classification problems include binary classification and multi-class classification, in which only one correct label exists, and multi-label classification, in which multiple correct labels exist. In particular, multi-label classification requires a training method different from that of ordinary classification because multiple correct labels exist. Moreover, multi-label classification is regarded as a hard problem in data science because the difficulty of prediction rises as the number of labels and classes increases. To address this, label embedding, which compresses the many labels, predicts the compressed labels, and then restores the predicted compressed labels to the original labels, is widely used. Autoencoder-based label embedding, which uses the deep learning autoencoder model, is representative of this approach, but it has the limitation of causing large information loss when compressing a high-dimensional label space with a very large number of classes into a low-dimensional latent label space. Accordingly, this study proposes a label embedding method that adds skip connections to both the encoder and the decoder of the autoencoder to minimize information loss during the compression of the high-dimensional label space. In an experiment predicting the multiple keywords of each paper from its abstract, using 4,675 academic papers collected from 'RISS', the Korean academic research information service, the proposed methodology outperformed the existing plain autoencoder-based label embedding technique in every respect: accuracy, precision, recall, and F1 score.
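For the keyword-prediction experiment described above, the inputs could be prepared as in the following sketch. The RISS corpus itself is not reproduced here, so `abstracts` and `keywords` are hypothetical toy data, and TF-IDF is one simple choice of text representation; the paper's exact feature extraction may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical inputs: one abstract string and one keyword list per paper.
abstracts = [
    "deep learning improves text classification of paper abstracts",
    "label embedding compresses a high dimensional label space",
]
keywords = [
    ["deep learning", "text classification"],
    ["label embedding"],
]

# Document features: TF-IDF vectors of the abstracts.
X = TfidfVectorizer(max_features=5000).fit_transform(abstracts)

# Labels: multi-hot keyword vectors spanning the high-dimensional label space.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(keywords)  # shape: (n_papers, n_distinct_keywords)
```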

References

  1. Kapoor, A., P. Jain, and R. Viswanathan, "Multilabel Classification using Bayesian Compressed Sensing," Advances in Neural Information Processing Systems 25, (2012).
  2. Vaswani, A., N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin, "Attention Is All You Need," arXiv:1706.03762, (2017).
  3. Wang, B., L. Chen, W. Sun, K. Qin, K. Li, and H. Zhou, "Ranking-Based Autoencoder for Extreme Multi-label Classification," arXiv:1904.05937, (2019).
  4. Yeh, C.-K., W.-C. Wu, W.-J. Ko, and Y.-C. F. Wang, "Learning Deep Latent Space for Multi-Label Classification," Thirty-First AAAI Conference on Artificial Intelligence, Vol.31, No.1 (2017).
  5. Denis, L., A. Aussem, and M. Gasse, "On the use of binary stochastic autoencoders for multi-label classification under the zero-one loss," Procedia Computer Science, Vol.144, (2018), 71~80. https://doi.org/10.1016/j.procs.2018.10.506
  6. Tai, F. and H.-T. Lin, "Multilabel Classification with Principal Label Space Transformation," Neural Computation, Vol.24, No.9 (2012), 2508~2542. https://doi.org/10.1162/NECO_a_00320
  7. Ganda, D. and R. Buch, "A Survey on Multi Label Classification," Recent Trends in Programming Languages, Vol.5, No.1 (2018), 19~23.
  8. Jo, I. S., Y. H. Kang, D. B. Choi, and Y. B. Park, "Clustering Performance Analysis of Autoencoder with Skip Connection," KIPS Transactions on Software and Data Engineering, Vol.9, No.12 (2020), 403~410. https://doi.org/10.3745/KTSDE.2020.9.12.403
  9. Wicker, J., A. Tyukin, and S. Kramer, "A Nonlinear Label Compression and Transformation Method for Multi-label Classification Using Autoencoders," Advances in Knowledge Discovery and Data Mining, (2016), 328~340.
  10. Wicker, J., B. Pfahringer, and S. Kramer, "Multi-label classification using boolean matrix decomposition," Proceedings of the 27th Annual ACM Symposium on Applied Computing, (2012), 179~186.
  11. He, K., X. Zhang, S. Ren, and J. Sun, "Deep Residual Learning for Image Recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, (2016), 770~778.
  12. Vincent, P., H. Larochelle, Y. Bengio, and P. A. Manzagol, "Extracting and composing robust features with denoising autoencoders," Proceedings of the 25th International Conference on Machine Learning, (2008), 1096~1103.
  13. Vincent, P., H. Larochelle, I. Lajoie, Y. Bengio, and P. A. Manzagol, "Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion," Journal of Machine Learning Research, Vol.11, (2010), 3371~3408.
  14. Baldi, P. and K. Hornik, "Neural Networks and Principal Component Analysis: Learning from Examples Without Local Minima," Neural Networks, Vol.2, (1989), 53~58. https://doi.org/10.1016/0893-6080(89)90014-2
  15. Mikolov, T., K. Chen, G. Corrado, and J. Dean, "Efficient Estimation of Word Representations in Vector Space," arXiv:1301.3781, (2013).
  16. Bengio, Y., P. Lamblin, D. Popovici, and H. Larochelle, "Greedy Layer-Wise Training of Deep Networks," Advances in Neural Information Processing Systems 19, (2007).
  17. Yang, Z., D. Yang, C. Dyer, X. He, A. Smola, and E. Hovy, "Hierarchical Attention Networks for Document Classification," Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, (2016), 1480~1489.