• Title/Summary/Keyword: 360 video

360 VR-based Sokcho Introduction Video Production (360 VR기반 속초 소개 영상 제작)

  • Lee, Jun-yeong; Im, So-Yeon; Park, Cheol-woo; Lee, Young-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.493-495 / 2022
  • This video is based on 360 VR, a newly emerged next-generation medium. As digital content has advanced and the contactless era brought on by the COVID-19 pandemic has unfolded, people have sought content they can enjoy without visiting places in person. 360 VR is a next-generation medium that lets users experience content three-dimensionally, as if they were on site, without actually traveling there. Using it, we study how to effectively produce local promotional videos.

Wrap-around Motion Vector Prediction for 360 Video Streams in Versatile Video Coding (VVC에서 360 비디오를 위한 랩-어라운드 움직임 벡터 예측 방법)

  • Lee, Minhun; Lee, Jongseok; Park, Juntaek; Lim, Woong; Bang, Gun; Sim, Dong Gyu; Oh, Seoung-Jun
    • Journal of Broadcast Engineering / v.25 no.3 / pp.313-324 / 2020
  • In this paper, we propose a motion vector prediction method that increases coding efficiency at the picture boundary by exploiting the characteristics of 360 video. In the current VVC design, a neighboring block located outside the picture boundary is excluded from the candidate list for inter prediction, which can reduce both coding efficiency and subjective quality. To solve this problem, we construct new candidates at the picture boundary by deriving the neighboring block's location from already decoded information, based on the projection format used for 360 video coding. To evaluate the proposed method, we compare it with VTM6.0 and 360Lib9.1 under the Random Access condition of the JVET 360 video CTC. The coding performance shows average BD-rate reductions of 0.02% in the luma component and 0.05% and 0.06% in the chroma components, respectively, without additional computational complexity. At the picture boundary, the average BD-rate reductions are 0.29% in luma and 0.45% and 0.43% in chroma, respectively. Furthermore, we performed a subjective quality test with the DSCQS method and obtained MOS values. The MOS value improved by 0.03, and BD-MOS computed from the MOS values and bit-rates shows that the proposed method improves performance by up to 8.78% and by 5.18% on average.
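
A minimal sketch of the wrap-around idea, not the authors' VTM implementation: in an ERP-projected frame the left and right picture edges are spatially continuous, so a spatial-neighbor position that falls outside the horizontal boundary can be wrapped to the opposite side instead of being dropped from the candidate list. The function names and block geometry below are illustrative assumptions.

```python
# Sketch: horizontal wrap-around of spatial-neighbor positions in an ERP frame,
# so boundary blocks keep their inter-prediction candidates instead of losing them.

def wrap_neighbor_x(x: int, pic_width: int) -> int:
    """Map a horizontal sample position onto the ERP frame by wrapping around."""
    return x % pic_width  # e.g. -4 -> pic_width - 4, pic_width + 8 -> 8

def candidate_positions(block_x: int, block_y: int, block_w: int,
                        pic_width: int, pic_height: int):
    """Collect left/above/above-right neighbor positions, wrapping horizontally."""
    raw = [
        (block_x - 1, block_y),            # left neighbor
        (block_x, block_y - 1),            # above neighbor
        (block_x + block_w, block_y - 1),  # above-right neighbor
    ]
    candidates = []
    for x, y in raw:
        if 0 <= y < pic_height:            # no vertical wrap in ERP
            candidates.append((wrap_neighbor_x(x, pic_width), y))
    return candidates

# Example: a block touching the left picture boundary still gets a "left"
# candidate taken from the right edge of the ERP frame.
print(candidate_positions(block_x=0, block_y=64, block_w=16,
                          pic_width=3840, pic_height=1920))
```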

A Study on the Arrangement of 360 VR Camera for Music Performance Video (음악 공연 영상의 360 VR 카메라 배치에 관한 연구)

  • Nam, SangHun; Kang, DongHyun; Kwon, JoungHuem
    • Journal of Broadcast Engineering / v.25 no.4 / pp.518-527 / 2020
  • 360 VR technology is used not only in film but also in performing arts such as music, theater, and dance because of its immersion and sense of presence; it allows the audience to feel like participants in the story. This study analyzes 360 video shooting techniques to answer the following questions: how to help viewers better understand a space, how to make viewers comfortable ceding control of the experience, and how to generate greater empathy with a 360 video. Thirty 360-degree videos of live stage performances were analyzed from among the music performance content shared on YouTube from 2015 to 2020. The results show that, because live performances take place with an audience, the stage shape and audience seating layout are chosen to suit the characteristics of the performance, and that directing with a 360 VR camera is also strongly affected by stage and audience placement. Stages are mainly classified into three types, and the camera layouts and characteristics typically used are organized according to the number of 360 VR cameras and whether fixed or mobile cameras are used.

Implementing Multiple-tile Extractor for Viewport-dependent 360 Video Streaming (사용자 시점 기반 360 도 영상 스트리밍을 위한 다중 타일 추출기 구현)

  • Jeong, Jong-Beom; Lee, Soonbin; Kim, Inae; Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.53-56 / 2020
  • 360-degree video coding and transmission technologies for providing immersive virtual reality video are being actively studied, but the computing power and bandwidth available to current virtual reality devices limit the transmission and playback of immersive video. This paper therefore implements a motion-constrained tile set (MCTS) based tile extractor that extracts viewport tiles in order to provide high-quality 360-degree viewport video. Unlike previous tile extractors implemented for high-efficiency video coding (HEVC), the proposed extractor extracts multiple tiles from a 360-degree video bitstream. The extracted tiles are then transmitted together with a low-quality bitstream of the entire 360-degree video to cope with unexpected changes in the user's viewport.
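
A minimal sketch of viewport-tile selection, assuming an ERP frame split into a uniform MCTS tile grid; it is not the paper's HEVC extractor, and the grid size and viewport rectangle are illustrative assumptions. It only shows how the tiles overlapped by a viewport, including one straddling the 360-degree seam, could be identified before their bitstream segments are extracted.

```python
# Sketch: find (col, row) indices of ERP tiles overlapped by a viewport rectangle,
# wrapping horizontally across the 360-degree seam and clamping vertically.

def tiles_for_viewport(vp_left, vp_top, vp_width, vp_height,
                       pic_w, pic_h, cols, rows):
    """Return sorted (col, row) indices of tiles overlapped by the viewport."""
    tile_w, tile_h = pic_w // cols, pic_h // rows
    # Columns: step across the viewport horizontally, wrapping at the ERP seam.
    col_set = {((vp_left + dx) % pic_w) // tile_w
               for dx in range(0, vp_width, tile_w)}
    col_set.add(((vp_left + vp_width - 1) % pic_w) // tile_w)
    # Rows: clamp vertically (no wrap over the poles in this simple tiling).
    top = max(vp_top, 0)
    bottom = min(vp_top + vp_height - 1, pic_h - 1)
    row_set = range(top // tile_h, bottom // tile_h + 1)
    return sorted((c, r) for c in col_set for r in row_set)

# Example: a 960x540 viewport straddling the right edge of a 6x4 tile grid.
print(tiles_for_viewport(vp_left=3600, vp_top=700, vp_width=960, vp_height=540,
                         pic_w=3840, pic_h=1920, cols=6, rows=4))
```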

A Reference Frame Extraction Method for 360-degree Video Identification by Measuring RGB Displacement Values (RGB 변위값 측정을 통한 360도 영상 식별 기준 프레임 추출 방법)

  • Yoo, Injae; Lee, Jeacheng; Jang, Seyoung; Park, Byeongchan; Kim, Youngmo; Kim, Seok-Yoon
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.419-420 / 2020
  • This paper proposes a method for selecting reference key frames for 360-degree video identification by measuring RGB displacement values, for use in detecting illegally copied video. Content such as broadcast programs and films is illegally distributed in large volumes over the Internet, both domestically and abroad, causing significant national losses. To determine such illegal copying quickly, the proposed method measures RGB displacement values for each frame extracted from a 360-degree video, groups frames recognized as belonging to the same scene, and selects a key frame for that scene. The proposed method can shorten the time needed to determine whether a video is an illegal copy and improve the accuracy of that determination.
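
A minimal sketch of the key-frame selection idea, assuming "RGB displacement" means a mean absolute RGB difference between consecutive frames (an assumption, not the paper's exact metric): consecutive frames whose displacement stays below a threshold are treated as one scene, and the first frame of each scene is kept as its key frame.

```python
# Sketch: group consecutive frames into scenes by mean absolute RGB difference
# and keep the first frame of each scene as the key frame.
import numpy as np

def select_key_frames(frames, threshold=12.0):
    """frames: iterable of HxWx3 uint8 arrays. Returns indices of key frames."""
    key_indices, prev = [], None
    for i, frame in enumerate(frames):
        rgb = frame.astype(np.float32)
        if prev is None:
            key_indices.append(i)                       # first frame starts a scene
        else:
            displacement = np.mean(np.abs(rgb - prev))  # mean RGB displacement
            if displacement > threshold:                # large change -> new scene
                key_indices.append(i)
        prev = rgb
    return key_indices

# Example with synthetic frames: a flat gray clip followed by a brighter clip.
clip = [np.full((90, 160, 3), 60, np.uint8)] * 5 + \
       [np.full((90, 160, 3), 200, np.uint8)] * 5
print(select_key_frames(clip))  # -> [0, 5]
```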

A Research on the Uses of and Satisfactions from 360° 3D Video Using VR Devices (VR 디바이스를 이용한 360° 3D 동영상 이용과 충족 연구 : 시청자와 시청예정자의 차이를 중심으로)

  • Moon, Yoon-Taek; Kim, Donna
    • The Journal of the Korea Contents Association / v.18 no.3 / pp.205-214 / 2018
  • As the paradigm of the Fourth Industrial Revolution expands, VR devices and their content industry are drawing increasing attention as next-generation digital platforms. Among VR content, 360° 3D videos are receiving the most attention in the media field and are being utilized by Google as educational content. This study analyzes the actual usage patterns of, and motivations for using, such content through a survey of 360° 3D video users. The results show that the most used platform is YouTube, and that the genres respondents have used or are willing to use are games and movies.

A study on the Influence of VR360° Degree Panoramic Video on Theory of Modern Films (VR 360° 기술이 현대영화 이론에 끼친 영향에 관한 연구)

  • HUA, LU HUI; Kim, Hae Yoon
    • Proceedings of the Korea Contents Association Conference / 2016.05a / pp.353-354 / 2016
  • Cinema has developed (from ordinary film to 3D mixes and 4D) to enhance the audiovisual experience of the audience. To date, the only technology capable of delivering the highest level of audiovisual experience is VR 360° panoramic video shooting. However, this technology is completely different from traditional film shooting techniques, and applying it to filmmaking has a major impact on existing film theory.

Acceleration of Viewport Extraction for Multi-Object Tracking Results in 360-degree Video (360도 영상에서 다중 객체 추적 결과에 대한 뷰포트 추출 가속화)

  • Heesu Park; Seok Ho Baek; Seokwon Lee; Myeong-jin Lee
    • Journal of Advanced Navigation Technology / v.27 no.3 / pp.306-313 / 2023
  • Realistic and graphics-based virtual reality content builds on 360-degree videos, and viewport extraction driven by the viewer's intention or by an automatic recommendation function is essential. This paper designs a viewport extraction system based on multiple object tracking in 360-degree videos and proposes the parallel computing structure needed for extracting multiple viewports. The viewport extraction process is parallelized with pixel-wise threads that transform ERP coordinates to 3D spherical surface coordinates and then map those spherical coordinates to 2D coordinates within the viewport. The proposed structure was evaluated on the computation time for up to 30 viewport extraction processes on aerial 360-degree video sequences, confirming up to a 5240-fold acceleration over CPU-based computation, whose time grows in proportion to the number of viewports. When high-speed I/O or memory buffers are used to reduce ERP frame I/O time, viewport extraction can be accelerated by a further 7.82 times. The proposed parallelized viewport extraction structure can be applied to simultaneous multi-access services for 360-degree video or virtual reality content and to per-user video summarization services.
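
A minimal sketch of the per-pixel viewport mapping, assuming an ERP source, a pinhole viewport model, and yaw/pitch viewing angles; NumPy broadcasting stands in for the pixel-wise GPU threads, and this is not the paper's implementation.

```python
# Sketch: map every viewport pixel to a direction on the unit sphere, then to
# ERP coordinates, and sample the ERP frame (nearest neighbor).
import numpy as np

def extract_viewport(erp, vp_w, vp_h, fov, yaw, pitch):
    """erp: HxWx3 array. Returns a vp_h x vp_w x 3 viewport image."""
    H, W = erp.shape[:2]
    f = 0.5 * vp_w / np.tan(0.5 * fov)                  # focal length in pixels
    u, v = np.meshgrid(np.arange(vp_w) - vp_w / 2 + 0.5,
                       np.arange(vp_h) - vp_h / 2 + 0.5)
    # Viewport ray directions in camera coordinates (z forward, y up).
    d = np.stack([u, -v, np.full_like(u, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    # Rotate by pitch (around x) then yaw (around y).
    cp, sp, cy, sy = np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = d @ (Ry @ Rx).T
    # Sphere direction -> longitude/latitude -> ERP pixel coordinates.
    lon = np.arctan2(d[..., 0], d[..., 2])              # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))          # [-pi/2, pi/2]
    x = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    y = np.clip(((0.5 - lat / np.pi) * H).astype(int), 0, H - 1)
    return erp[y, x]

# Example: a 640x360 viewport looking 30 degrees to the right.
erp = np.random.randint(0, 255, (1920, 3840, 3), dtype=np.uint8)
vp = extract_viewport(erp, 640, 360, fov=np.deg2rad(90),
                      yaw=np.deg2rad(30), pitch=0.0)
print(vp.shape)
```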

An Advanced QER Selection Algorithm Based on MMT Protocol for 360-Degree VR Video Streaming (MMT 프로토콜 기반의 360도 VR 비디오 전송을 위한 개선된 QER 선택 알고리듬)

  • Kim, A-young; An, Eun-bin; Seo, Kwang-deok
    • Journal of Broadcast Engineering / v.24 no.6 / pp.948-955 / 2019
  • As interest in 360-degree VR (Virtual Reality) video services grows enormously, compression and streaming technologies for VR video data have developed rapidly. The Quality Emphasized Region (QER) based streaming scheme is a viewport-adaptive 360-degree video streaming technology for maintaining an immersive experience while reducing bandwidth waste. To select the QER corresponding to the user's gaze coordinate, a QER-based scheme must calculate the distance to each Quality Emphasis Center (QEC) and deliver signaling messages to request QER switching. The QEC distance calculation has high computational complexity because it is repeated as many times as there are QERs, and the signaling message interval creates a trade-off between efficient bandwidth usage and flexible QER switching. In this paper, we propose an improved QER selection algorithm based on the MMT protocol to solve this problem. The proposed algorithm reduces computational complexity by using a preprocessed QER_ID_MAP, and achieves flexible QER switching as well as efficient bandwidth utilization through adaptive adjustment of the signaling interval.
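
A minimal sketch of one interpretation of the preprocessed QER_ID_MAP: quantize the (yaw, pitch) gaze space once, store the nearest QER id per cell, and turn per-update QEC distance computations into a single table lookup. The map resolution, the angular distance metric, and the function names are assumptions, not the paper's definitions.

```python
# Sketch: build a (pitch, yaw) -> QER id lookup table once, then select a QER
# per gaze update in O(1) instead of computing a distance to every QEC.
import math

def build_qer_id_map(qec_centers, yaw_bins=72, pitch_bins=36):
    """qec_centers: list of (yaw_deg, pitch_deg) per QER. Returns a 2D id map."""
    def angular_dist(a, b):
        (y1, p1), (y2, p2) = a, b
        dy = min(abs(y1 - y2), 360 - abs(y1 - y2))   # wrap yaw difference
        return math.hypot(dy, p1 - p2)
    id_map = []
    for pi in range(pitch_bins):
        pitch = -90 + (pi + 0.5) * 180 / pitch_bins
        row = []
        for yi in range(yaw_bins):
            yaw = (yi + 0.5) * 360 / yaw_bins
            row.append(min(range(len(qec_centers)),
                           key=lambda k: angular_dist((yaw, pitch), qec_centers[k])))
        id_map.append(row)
    return id_map

def select_qer(id_map, yaw_deg, pitch_deg):
    """O(1) QER lookup for a gaze coordinate using the preprocessed map."""
    pitch_bins, yaw_bins = len(id_map), len(id_map[0])
    yi = int((yaw_deg % 360) / 360 * yaw_bins)
    pi = int(min(max((pitch_deg + 90) / 180 * pitch_bins, 0), pitch_bins - 1))
    return id_map[pi][yi]

qecs = [(0, 0), (90, 0), (180, 0), (270, 0), (0, 90), (0, -90)]  # 6 example QECs
id_map = build_qer_id_map(qecs)
print(select_qer(id_map, yaw_deg=85.0, pitch_deg=10.0))  # -> 1
```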

Object Recognition in 360° Streaming Video (360° 스트리밍 영상에서의 객체 인식 연구)

  • Yun, Jeongrok; Chun, Sungkuk; Kim, Hoemin; Kim, Un Yong
    • Proceedings of the Korean Society of Computer Information Conference / 2019.07a / pp.317-318 / 2019
  • As interest in immersive content based on spatial information, typified by virtual and augmented reality, grows, research on intelligent spatial recognition such as object recognition is being actively pursued. In particular, with the development of visualization devices such as HMDs and the advent of 5G communication, the infrastructure for transmitting, receiving, and visualizing large volumes of video in real time is now in place, increasing the need for research on high-degree-of-freedom content such as 360° streaming video processing. However, deep-learning-based object recognition, the representative approach to intelligent video processing, has mostly dealt with ordinary planar images, and research on panoramic images, especially 360° streaming video, remains limited. This paper describes a method for object recognition in 360° streaming video using deep learning. To this end, training data are collected from 360° camera footage and a model is trained with YOLO (You Only Look Once), which enables real-time object detection. The experimental results show object recognition results on 360° video using the training data, as well as recognition results according to the number of training iterations.
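
The paper trains YOLO directly on frames captured from a 360° camera; as one illustrative way to apply a planar detector to an ERP frame (a swapped-in sliding-window approach, not the paper's method), the sketch below splits the frame into overlapping horizontal crops that wrap across the 360-degree seam and maps box coordinates back. The detector callable is a stand-in for a trained YOLO model.

```python
# Sketch: run a planar detector over an ERP frame using overlapping horizontal
# crops so objects near crop edges and the 360-degree seam are still covered.
import numpy as np

def detect_on_erp(erp, detector, crop_w=960, overlap=160):
    """erp: HxWx3 array. detector(crop) -> list of (x, y, w, h, label, score)."""
    H, W = erp.shape[:2]
    detections = []
    x0 = 0
    while x0 < W:
        xs = [(x0 + i) % W for i in range(crop_w)]     # wrap across the seam
        crop = erp[:, xs]
        for (x, y, w, h, label, score) in detector(crop):
            detections.append(((x0 + x) % W, y, w, h, label, score))
        x0 += crop_w - overlap
    return detections

# Example with a dummy detector that "finds" one box per crop.
dummy = lambda crop: [(10, 20, 50, 60, "person", 0.9)]
erp = np.zeros((1920, 3840, 3), dtype=np.uint8)
print(len(detect_on_erp(erp, dummy)))
```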
