Watermark Extraction Method of Omnidirectional Images Using CNN


  • Moon, Won-Jun (Department of Electronic Materials Engineering, Kwangwoon University) ;
  • Seo, Young-Ho (Department of Electronic Materials Engineering, Kwangwoon University) ;
  • Kim, Dong-Wook (Department of Electronic Materials Engineering, Kwangwoon University)
  • Received : 2019.12.19
  • Accepted : 2020.02.04
  • Published : 2020.03.30


In this paper, we propose a watermark extraction method for omnidirectional images using a CNN (Convolutional Neural Network) to improve on the extraction accuracy of the previous deterministic, algorithm-based method. The CNN consists of a restoration process, which extracts the watermark while correcting for the distortion introduced during omnidirectional image generation and/or by malicious attacks, and a classification process, which identifies which watermark the extracted one is. Experiments with various attacks confirm that the extracted watermarks are more accurate than those of the previous methods.
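The two-stage structure described above can be sketched as a forward pass: a convolutional restoration stage that corrects a distorted watermarked patch, followed by a classification stage that picks which of the candidate watermarks was embedded. This is only an illustrative NumPy sketch; the layer sizes, kernel counts, and number of candidate watermarks are assumptions, not the paper's actual architecture, and the weights here are random rather than trained.

```python
# Illustrative two-stage sketch (assumed sizes; weights are untrained).
import numpy as np

def conv2d(x, w, b):
    """Valid 2-D convolution: x (H, W, Cin), w (kH, kW, Cin, Cout), b (Cout,)."""
    kH, kW, Cin, Cout = w.shape
    H, W, _ = x.shape
    out = np.zeros((H - kH + 1, W - kW + 1, Cout))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = x[i:i + kH, j:j + kW, :]           # (kH, kW, Cin)
            out[i, j, :] = np.tensordot(patch, w, axes=3) + b
    return out

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)

# --- Restoration stage: corrects projection distortion / attack damage ---
w1 = rng.normal(0, 0.1, (3, 3, 1, 8)); b1 = np.zeros(8)
w2 = rng.normal(0, 0.1, (3, 3, 8, 1)); b2 = np.zeros(1)

def restore(patch):
    h = relu(conv2d(patch, w1, b1))
    return conv2d(h, w2, b2)                           # restored watermark patch

# --- Classification stage: which of N candidate watermarks was embedded ---
N_WATERMARKS = 16                                      # assumed codebook size

def classify(restored, w_fc, b_fc):
    logits = restored.reshape(-1) @ w_fc + b_fc
    e = np.exp(logits - logits.max())
    return e / e.sum()                                 # softmax over watermark indices

patch = rng.normal(0, 1, (16, 16, 1))                  # a distorted watermarked patch
restored = restore(patch)                              # (12, 12, 1) after two valid convs
w_fc = rng.normal(0, 0.1, (restored.size, N_WATERMARKS))
b_fc = np.zeros(N_WATERMARKS)
probs = classify(restored, w_fc, b_fc)
watermark_id = int(np.argmax(probs))                   # index of the classified watermark
```

In practice both stages would be trained end to end on patches that have undergone the projection distortions and attacks the method is meant to survive.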



