A Feature Point Recognition Ratio Improvement Method for Immersive Contents Using Deep Learning

  • Park, Byeongchan (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Jang, Seyoung (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Yoo, Injae (Research Institute, Beyondtech Inc.) ;
  • Lee, Jaechung (Research Institute, Beyondtech Inc.) ;
  • Kim, Seok-Yoon (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Kim, Youngmo (Dept. of Computer Science and Engineering, Soongsil University)
  • Received : 2020.03.31
  • Accepted : 2020.06.22
  • Published : 2020.06.30

Abstract

The market for immersive 360-degree video content, regarded as one of the key technologies of the fourth industrial revolution, grows every year. However, since most videos are distributed through illegal channels such as torrent sites after their DRM is removed, the damage caused by illegal copying is also increasing. Filtering technology is used to counter these issues for 2D videos, but most existing filtering techniques are limited to judging whether a 2D video is an illegal copy; applying them to immersive 360-degree videos requires overcoming technical limitations such as the huge feature-point data volume, and the corresponding processing load, that result from ultra-high resolutions of 4K UHD and above. To solve these problems, this paper proposes a method that uses deep learning to improve the feature-point recognition ratio for immersive 360-degree videos.
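The filtering approach referred to above works by matching feature points extracted from a suspect video against those of the registered original. As a purely illustrative sketch (not the paper's deep-learning method), the snippet below shows one classic building block of such feature matching, Lowe's ratio test, which accepts a match only when the nearest descriptor is clearly closer than the second nearest; the 128-dimensional SIFT-like descriptors here are synthetic assumptions for the demo.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Lowe's ratio test: keep the match (i, j) only when the nearest
    descriptor in desc_b is clearly closer than the second nearest."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distance to every candidate
        j, k = np.argsort(dists)[:2]                # nearest and second-nearest indices
        if dists[j] < ratio * dists[k]:             # unambiguous nearest neighbor only
            matches.append((i, j))
    return matches

# Hypothetical demo: desc_b holds noisy copies of desc_a plus random distractors,
# mimicking original-frame descriptors recovered from a re-encoded copy.
np.random.seed(0)
desc_a = np.random.rand(5, 128)
desc_b = np.vstack([desc_a + 0.01 * np.random.rand(5, 128),
                    np.random.rand(20, 128)])
print(match_descriptors(desc_a, desc_b))
```

The ratio test discards ambiguous matches rather than forcing a nearest-neighbor assignment, which is why it remains a common baseline that learned matching methods are compared against.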
