
A Research on the Teaser Video Production Method by Keyframe Extraction Based on YCbCr Color Model

  • Lee, Seo-young (Department of Computer Science and Engineering, Korea University of Technology and Education) ;
  • Park, Hyo-Gyeong (Department of Computer Science and Engineering, Korea University of Technology and Education) ;
  • Young, Sung-Jung (Department of Computer Science and Engineering, Korea University of Technology and Education) ;
  • You, Yeon-Hwi (Department of Computer Science and Engineering, Korea University of Technology and Education) ;
  • Moon, Il-Young (Department of Computer Science and Engineering, Korea University of Technology and Education)
  • Received : 2022.08.03
  • Accepted : 2022.08.18
  • Published : 2022.08.31

Abstract

With the growth of online media platforms and the COVID-19 pandemic, the production and consumption of digital video content have increased sharply. To choose what to watch, users rely on thumbnails and teaser videos to grasp a piece of content in a short time before selecting the content that suits them. Reviewing every piece of digital video content produced around the world and manually editing a teaser video for each one is highly impractical. In this paper, to generate teaser videos automatically, we extract keyframes based on the YCbCr color model and refine the extracted keyframes with a clustering technique. Finally, we present a method for producing a teaser video that helps users preview digital video content by concatenating the final keyframes.
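The abstract outlines a three-stage pipeline: convert frames to the YCbCr color model and pick candidate keyframes, refine the candidates with clustering, and concatenate the survivors into a teaser clip. The sketch below is one minimal way such a pipeline could be assembled in Python with OpenCV and scikit-learn; the histogram-difference rule, the k-means settings, the thresholds, and the file names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a YCbCr keyframe -> clustering -> teaser pipeline.
# All parameter values (bins, thresholds, clip length) are illustrative guesses.
import cv2
import numpy as np
from sklearn.cluster import KMeans


def ycbcr_histogram(frame, bins=32):
    """Flattened, normalized per-channel histogram in YCbCr (OpenCV's YCrCb)."""
    ycbcr = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    hists = [cv2.calcHist([ycbcr], [c], None, [bins], [0, 256]) for c in range(3)]
    hist = np.concatenate(hists).ravel()
    return hist / (hist.sum() + 1e-8)


def extract_candidates(video_path, diff_threshold=0.25):
    """Keep frames whose YCbCr histogram differs sharply from the previous frame."""
    cap = cv2.VideoCapture(video_path)
    candidates, prev_hist = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = ycbcr_histogram(frame)
        if prev_hist is None or np.abs(hist - prev_hist).sum() > diff_threshold:
            candidates.append((frame, hist))
        prev_hist = hist
    cap.release()
    return candidates


def optimize_keyframes(candidates, n_keyframes=8):
    """Cluster candidate histograms; keep the frame nearest each cluster centroid."""
    if len(candidates) <= n_keyframes:
        return [frame for frame, _ in candidates]
    feats = np.stack([hist for _, hist in candidates])
    km = KMeans(n_clusters=n_keyframes, n_init=10, random_state=0).fit(feats)
    best_idx = []
    for k in range(n_keyframes):
        idx = np.where(km.labels_ == k)[0]
        dists = np.linalg.norm(feats[idx] - km.cluster_centers_[k], axis=1)
        best_idx.append(idx[np.argmin(dists)])
    # Re-sort by original frame position so the teaser stays chronological.
    return [candidates[i][0] for i in sorted(best_idx)]


def write_teaser(keyframes, out_path="teaser.mp4", seconds_per_frame=1.0, fps=24):
    """Hold each keyframe for a fixed duration and write the result as a video."""
    h, w = keyframes[0].shape[:2]
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in keyframes:
        for _ in range(int(seconds_per_frame * fps)):
            writer.write(cv2.resize(frame, (w, h)))
    writer.release()


if __name__ == "__main__":
    # "input.mp4" is a hypothetical source video, not a file from the paper.
    candidates = extract_candidates("input.mp4")
    write_teaser(optimize_keyframes(candidates))
```

Selecting the frame closest to each cluster centroid and re-sorting the winners by their original position keeps the teaser both representative and chronological; the per-keyframe hold duration is likewise an arbitrary choice in this sketch.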

Keywords

Acknowledgement

This work was supported by a Basic Research Project grant (No. 2021R1I1A3057800) of the National Research Foundation of Korea (NRF), funded by the Korean government (Ministry of Education) in 2022.
