
A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning

  • Park, Byeongchan (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Jang, Seyoung (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Yoo, Injae (Research Institute, Beyondtech Inc.) ;
  • Lee, Jaechung (Research Institute, Beyondtech Inc.) ;
  • Kim, Seok-Yoon (Dept. of Computer Science and Engineering, Soongsil University) ;
  • Kim, Youngmo (Dept. of Computer Science and Engineering, Soongsil University)
  • Received : 2020.04.08
  • Accepted : 2020.06.24
  • Published : 2020.06.30

Abstract

As a main technology of the 4th industrial revolution, immersive 360-degree video content is drawing attention. The worldwide market for immersive 360-degree video content is projected to grow from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video content is distributed through illegal distribution networks such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. To prevent such illegal distribution, the existing 2D video industry uses copyright filtering technology. The technical difficulty with immersive 360-degree videos is that they require ultra-high-quality pictures and merge images captured by two or more cameras into a single image, which creates distortion regions. There are also technical limitations such as the increase in feature point data caused by the ultra-high definition and the resulting processing speed requirements. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper proposes a feature point extraction and identification technique that selects object identification areas excluding regions with severe distortion, recognizes objects in those areas using deep learning, and extracts feature points using the identified object information. Compared with the previously proposed method that extracts feature points from the stitching area of immersive content, the proposed technique shows a clear performance gain.
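The pipeline summarized in the abstract can be sketched in a few steps. The sketch below is illustrative only: `band_ratio` is an assumed value for how much of the top and bottom of an equirectangular frame to discard as distorted, `detect_objects` is a hypothetical stand-in for the deep-learning detector, and the per-object intensity histogram is a simplified substitute for the paper's feature vectors.

```python
import numpy as np

def select_identification_region(frame, band_ratio=0.25):
    """Crop away the top and bottom bands of an equirectangular frame,
    where projection/stitching distortion is most severe.
    The 25% band width is an assumption for illustration."""
    h = frame.shape[0]
    band = int(h * band_ratio)
    return frame[band:h - band, :]

def detect_objects(region):
    """Hypothetical stand-in for a deep-learning object detector.
    A real system would run a CNN and return (label, bbox) pairs;
    here we return one fixed central box."""
    h, w = region.shape[:2]
    return [("object", (w // 4, h // 4, w // 2, h // 2))]

def extract_features(region, detections, bins=16):
    """Build one descriptor per detected object from its image patch.
    A normalized 16-bin intensity histogram is a simplified substitute
    for the feature vectors used in the paper."""
    descriptors = []
    for label, (x, y, w, h) in detections:
        patch = region[y:y + h, x:x + w]
        hist, _ = np.histogram(patch, bins=bins, range=(0, 255))
        descriptors.append((label, hist / max(hist.sum(), 1)))
    return descriptors

# Example on a synthetic 960x1920 grayscale "360-degree" frame
frame = np.random.default_rng(0).integers(0, 256, (960, 1920), dtype=np.uint8)
region = select_identification_region(frame)
features = extract_features(region, detect_objects(region))
print(region.shape, len(features))
```

The key design point is ordering: the distortion-heavy bands are excluded before detection, so both the detector and the feature extractor only ever see the low-distortion identification area.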

References

  1. H. W. Chun, M. K. Han, and J. H. Jang, "Application trends in virtual reality," Electronics and Telecommunications Trends, 2017.
  2. J. Y. Kim, "Design of 360 degree video and VR contents," Communication Books, 2017.
  3. R. Kijima and K. Yamaguchi, "VR device time: Hi-precision time management by synchronizing times between devices and host PC through USB," IEEE Virtual Reality (VR), 2016. DOI: 10.1109/VR.2016.7504723
  4. Y. S. Ho, "MPEG-I standard and 360 degree video content generation," Journal of Electrical Engineering, 2017.
  5. W16824, Text of ISO/IEC DIS 23090-2 Omnidirectional MediA Format (OMAF).
  6. Y. H. Won, J. S. Kim, B. C. Park, Y. M. Kim, and S. Y. Kim, "An Efficient Feature Point Extraction Method for 360° Realistic Media Utilizing High Resolution Characteristics," Journal of The Korea Society of Computer and Information, vol.24, no.1, pp.85-92, 2019. DOI: 10.9708/jksci.2019.24.01.085
  7. B. C. Park, J. S. Kim, Y. H. Won, Y. M. Kim, and S. Y. Kim, "An Efficient Feature Point Extraction and Comparison Method through Distorted Region Correction in 360-degree Realistic Contents," Journal of The Korea Society of Computer and Information, vol.24, no.1, pp.93-100, 2019. DOI: 10.9708/jksci.2019.24.01.093
  8. W. J. Ha and K. A. Sohn, "Image classification approach for Improving CBIR system performance," 2016 KICS Conf. Winter, pp.308-309, 2016. DOI: 10.7840/kics.2016.41.7.816
  9. H. J. Jung and J. S. Yoo, "Feature matching algorithm robust to viewpoint change," J. KICS, vol.40, no.12, pp.2363-2371, 2015. DOI: 10.7840/kics.2015.40.12.2363