Clustering Performance Analysis of Autoencoder with Skip Connection


  • Received : 2020.04.22
  • Accepted : 2020.10.02
  • Published : 2020.12.31

Abstract

Alongside research that uses the data-restoration (output) capability of autoencoders for tasks such as noise removal and super-resolution, research on improving clustering performance with the dimension-reduction capability of autoencoders is also actively being conducted. The clustering function and the data-restoration function of an autoencoder share a common trait: both improve through the same training process. Based on this characteristic, this study conducted an experiment to determine whether an autoencoder model designed for excellent data-restoration performance is also superior in clustering performance. Skip connections were used to design an autoencoder with excellent restoration performance. The restoration and clustering performance of the model with skip connections and the model without them were compared through graphs and visual samples: restoration performance increased, but clustering performance decreased. This result indicates that, for neural network models such as autoencoders, a good output does not guarantee that each layer has learned the characteristics of the data well. Finally, the degradation in clustering performance was compensated for by using both the latent code and the skip connections. This study is a preliminary study toward solving the Hanja Unicode problem through clustering.

While research such as noise removal and super-resolution using the data-restoration (output) function of autoencoders is underway, research on improving clustering performance using the dimension-reduction function of autoencoders is also actively being conducted. The clustering function and the data-restoration function of an autoencoder share the trait that both improve performance through the same training. Based on this characteristic, this paper conducted an experiment to determine whether an autoencoder model designed for excellent data-restoration performance also excels at clustering. Skip connections were used to design an autoencoder with excellent restoration performance. Skip connections not only mitigate the vanishing-gradient problem and improve training efficiency, but also improve restoration performance by supplementing information lost during reconstruction. Comparing the restoration and clustering performance of the models with and without skip connections through graphs and visual samples showed that restoration performance improved while clustering performance declined. This result indicates that, for neural network models such as autoencoders, good output quality does not guarantee that each layer has learned the characteristics of the data well. Finally, the relationship between the latent code, which governs clustering performance, and the skip connections was analyzed to identify the cause of the experimental result, and it was shown that the degradation in clustering performance can be compensated for by using the feature information of both the latent code and the skip connections. This study is a preliminary study on improving clustering performance, aimed at solving the Hanja Unicode problem through clustering.
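The architecture the abstract describes can be sketched as a forward pass in which an additive skip connection routes the encoder's hidden activation around the latent bottleneck into the matching decoder layer, while the latent code itself is what would be fed to a clustering algorithm such as k-means. The layer sizes, initialization, and function names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Hypothetical layer sizes: 784 -> 128 -> 10 (latent) -> 128 -> 784
W_enc1 = rng.normal(0.0, 0.05, (784, 128))
W_enc2 = rng.normal(0.0, 0.05, (128, 10))
W_dec1 = rng.normal(0.0, 0.05, (10, 128))
W_dec2 = rng.normal(0.0, 0.05, (128, 784))

def forward(x, use_skip=True):
    h_enc = relu(x @ W_enc1)   # encoder hidden activation
    z = relu(h_enc @ W_enc2)   # latent code: the representation used for clustering
    h_dec = relu(z @ W_dec1)
    if use_skip:
        # Symmetric additive skip connection: the encoder activation is added
        # to the matching decoder layer, so reconstruction detail can bypass
        # the 10-dimensional bottleneck without passing through the latent code.
        h_dec = h_dec + h_enc
    x_hat = h_dec @ W_dec2     # reconstruction (the "output result")
    return z, x_hat

x = rng.normal(0.0, 1.0, (4, 784))
z, x_hat = forward(x)
print(z.shape, x_hat.shape)  # → (4, 10) (4, 784)
```

The sketch makes the paper's observation plausible: with `use_skip=True`, the decoder can reconstruct well even if the latent code `z` carries less discriminative information, which is consistent with restoration improving while clustering on `z` degrades.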

