• Title/Summary/Keyword: 360 video


Comparison of experience recognition in 360° virtual reality videos and common videos (360° 가상현실 동영상과 일반 동영상 교육 콘텐츠의 경험인식 비교 분석)

  • Jung, Eun-Kyung;Jung, Ji-Yeon
    • The Korean Journal of Emergency Medical Services / v.23 no.3 / pp.145-154 / 2019
  • Purpose: This study simulated cardiac arrest situations in 360° virtual reality video clips and general video clips and compared the correlations between educational media and experience recognition. Methods: Experimental research was carried out on a randomized control group (n=32) and experimental group (n=32) on March 20, 2019. Results: The group trained with the 360° virtual reality video clips had a higher experience recognition score (p=.047) than the group trained with the general video clips. Moreover, the subfactors of experience recognition, including the sense of presence and vividness (p=.05), immersion (p<.05), experience (p<.01), fantasy factor (p<.05), and content satisfaction (p<.05), were positively correlated. Conclusion: Enhancing vividness and the sense of presence when developing virtual reality videos recorded with a 360° camera is thought to enable experience recognition without any direct interaction.

Implementing VVC Tile Extractor for 360-degree Video Streaming Using Motion-Constrained Tile Set

  • Jeong, Jong-Beom;Lee, Soonbin;Kim, Inae;Lee, Sangsoon;Ryu, Eun-Seok
    • Journal of Broadcast Engineering / v.25 no.7 / pp.1073-1080 / 2020
  • 360-degree video streaming technologies have been widely developed to provide immersive virtual reality (VR) experiences. However, high computational power and bandwidth are required to transmit and render high-quality 360-degree video through a head-mounted display (HMD). One way to overcome this problem is to transmit only the viewport area in high quality. This paper therefore proposes a motion-constrained tile set (MCTS)-based tile extractor for versatile video coding (VVC). The proposed extractor extracts high-quality viewport tiles, which are simulcast with a low-quality whole video to respond to unexpected movements by the user. The experimental results demonstrate a 24.81% Bjøntegaard delta rate (BD-rate) saving for the luma peak signal-to-noise ratio (PSNR) compared to a VVC anchor without tiled streaming.
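The viewport-dependent selection described above can be sketched as a simple geometric test: given the viewer's gaze direction, pick the equirectangular tiles whose centers fall within the field of view. This is an illustrative assumption of how such an extractor might choose tiles; the function name, tile grid, and margins are hypothetical and not from the paper.

```python
def select_viewport_tiles(yaw_deg, pitch_deg, fov_deg=90, cols=6, rows=4):
    """Return (col, row) indices of the ERP tiles overlapping a viewport.

    A crude sketch: a tile is kept when its center lies within half the
    field of view of the gaze direction, padded by half a tile so that
    partially visible tiles are included. Longitude wraps around at 180°.
    """
    tile_w = 360 / cols   # degrees of longitude covered by one tile
    tile_h = 180 / rows   # degrees of latitude covered by one tile
    half = fov_deg / 2
    tiles = []
    for r in range(rows):
        for c in range(cols):
            lon = -180 + (c + 0.5) * tile_w       # tile center longitude
            lat = 90 - (r + 0.5) * tile_h         # tile center latitude
            dlon = (lon - yaw_deg + 180) % 360 - 180  # wrap-aware distance
            if (abs(dlon) <= half + tile_w / 2
                    and abs(lat - pitch_deg) <= half + tile_h / 2):
                tiles.append((c, r))
    return tiles
```

In a real MCTS pipeline these indices would address independently decodable tile bitstreams; the low-quality whole-frame stream covers the remaining tiles.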

A Study on Effective Stitching Technique of 360° Camera Image (360° 카메라 영상의 효율적인 스티칭 기법에 관한 연구)

  • Lee, Lang-Goo;Chung, Jean-Hun
    • Journal of Digital Convergence / v.16 no.2 / pp.335-341 / 2018
  • This study examines effective stitching techniques for video recorded with a dual-lens 360° camera composed of two fisheye lenses. First, problems were identified in the results of stitching with the camera's bundled program. The study then sought a more efficient, near-perfect stitching technique by comparatively analyzing the results of stitching with the professional stitching programs Autopano Video Pro and Autopano Giga. The bundled program's problems were found to be horizontal and vertical distortion, exposure and color mismatch, and unsmooth stitching lines. The horizontal and vertical problems could be solved with the Automatic Horizon and Verticals Tools of Autopano Video Pro and Autopano Giga, the exposure and color problems with the Levels, Color, and Edit Color Anchors functions, and the stitching-line problem with the Mask function. Based on this study, it is hoped that near-perfect 360° VR video content can be produced through efficient stitching of video recorded with a dual-lens 360° camera.

A Reference Frame Selection Method Using RGB Vector and Object Feature Information of Immersive 360° Media (실감형 360도 미디어의 RGB 벡터 및 객체 특징정보를 이용한 대표 프레임 선정 방법)

  • Park, Byeongchan;Yoo, Injae;Lee, Jaechung;Jang, Seyoung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE / v.24 no.4 / pp.1050-1057 / 2020
  • Immersive 360-degree media suffers from slow video recognition when processed by conventional methods, since a variety of rendering methods are used and the video is of higher quality and far larger volume than conventional video. In addition, because the camera is usually fixed in one place, most immersive 360-degree media captures only one scene, so it is unnecessary to extract feature information from every scene. This paper proposes a reference frame selection method for immersive 360-degree media and describes its application to copyright protection technology. The proposed method performs three pre-processing steps: frame extraction from the immersive 360-degree media, frame downsizing, and spherical rendering. In the rendering step, the video is divided into 16 frames and captured. In the central region, where object information is concentrated, objects are extracted using per-pixel RGB vectors and deep learning, and a reference frame is selected using the object feature information.
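The selection step above could be approximated, minus the deep-learning object detector, by scoring the central region of each downsized frame with a per-pixel RGB statistic. The sketch below uses RGB variance as a stand-in proxy for "much object information"; the function and the scoring rule are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def pick_reference_frame(frames):
    """Pick a reference frame from a list of HxWx3 uint8 arrays.

    Scores only the central quarter of each (already downsized) frame,
    mirroring the paper's focus on the center region, and uses summed
    per-channel RGB variance as a crude proxy for object information.
    """
    best_i, best_score = 0, -1.0
    for i, frame in enumerate(frames):
        h, w, _ = frame.shape
        centre = frame[h // 4: 3 * h // 4, w // 4: 3 * w // 4]
        score = centre.reshape(-1, 3).astype(np.float64).var(axis=0).sum()
        if score > best_score:
            best_i, best_score = i, score
    return best_i
```

A production system would replace the variance score with features from the object detector and compare them against a fingerprint database.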

Method for Applying Wavefront Parallel Processing on Cubemap Video (큐브맵 영상에 Wavefront 병렬 처리를 적용하는 방법)

  • Hong, Seok Jong;Park, Gwang Hoon
    • Journal of Broadcast Engineering / v.22 no.3 / pp.401-404 / 2017
  • 360 VR video is stored in a projected format such as an isometric shape or a cubic shape. Although these formats have different characteristics, they share a resolution higher than that of normal 2D video. Coding and decoding 360 VR video therefore takes much longer than for 2D video, so parallel processing techniques are essential. HEVC, the state-of-the-art 2D video codec, standardizes Wavefront Parallel Processing (WPP) for parallelization. This technique is optimized for 2D video and does not show optimal performance on 360 VR video, so a WPP method suitable for 360 VR video is required. In this paper, we propose a WPP coding/decoding method that improves WPP performance on cubemap-format 360 VR video. The method was implemented in the HEVC reference software HM 12.0. The experimental results show no significant PSNR loss compared with existing WPP, while coding complexity is further reduced by 15% to 20%. The proposed method is expected to be included in future 3D VR video codecs.
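Standard HEVC WPP lets each CTU row start once the row above has finished two CTUs, so row r, column c can begin at step 2r + c. The following sketch computes that dependency schedule; it is a simplified model of the scheduling constraint, not the paper's cubemap-specific modification (the lag and helper name are illustrative assumptions).

```python
def wpp_schedule(rows, cols, lag=2):
    """Earliest step at which each CTU (row, col) can start under WPP.

    A CTU waits for its left neighbour in the same row, and a row may
    start only after the row above has completed `lag` CTUs (the CABAC
    context-sync constraint in HEVC WPP).
    """
    start = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            t = 0 if c == 0 else start[r][c - 1] + 1
            if r > 0:
                # row above must have finished its first `lag` CTUs
                t = max(t, start[r - 1][min(c + lag - 1, cols - 1)] + 1)
            start[r][c] = t
    return start
```

With 3 rows of 5 CTUs the last CTU starts at step 2·2 + 4 = 8, versus 14 for serial coding; a cubemap-aware WPP would additionally exploit continuity across face boundaries to shorten these dependencies.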

The Implementation of Information Providing Method System for Indoor Area by using the Immersive Media's Video Information (실감미디어 동영상정보를 이용한 실내 공간 정보 제공 시스템 구현)

  • Lee, Sangyoon;Ahn, Heuihak
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.3 / pp.157-166 / 2016
  • This paper presents an indoor spatial information service using 6D-360 degree immersive media video information. We implement augmented reality that provides various kinds of information, such as the position and movement of specific locations in indoor spaces where GPS signals do not reach. Augmented reality containing 6D-360 degree immersive media video information provides position information and three-dimensional spatial image information to identify the exact location of a user, both in a fixed indoor space and in the indoor space of a moving object. The paper constructs a three-dimensional image database based on the 6D-360 degree immersive media video information and provides an augmented reality service on top of it. By mapping various kinds of information onto the 6D-360 degree immersive media video, users can inspect a plant in the same environment as the actual one, and the service can offer emergency escape and repair guidance to passengers and employees.

Luminance Compensation using Feature Points and Histogram for VR Video Sequence (특징점과 히스토그램을 이용한 360 VR 영상용 밝기 보상 기법)

  • Lee, Geon-Won;Han, Jong-Ki
    • Journal of Broadcast Engineering / v.22 no.6 / pp.808-816 / 2017
  • 360 VR video systems have become important for providing an immersive experience to viewers. Such a system consists of stitching, projection, compression, inverse projection, and viewport extraction. This paper proposes an efficient luminance compensation technique for 360 VR video sequences that utilizes feature extraction and histogram equalization algorithms. The proposed luminance compensation algorithm enhances stitching performance in the 360 VR system. Simulation results show that the proposed technique increases the quality of the displayed image.
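Histogram-based luminance compensation of the kind described above is commonly realized as histogram matching: remap the luma of one camera view so its cumulative distribution follows a reference view before the seams are blended. The sketch below is a generic histogram-matching routine under that assumption, not the paper's specific algorithm (which also uses feature points).

```python
import numpy as np

def match_luminance(src, ref):
    """Histogram-match the luma of `src` to `ref` (2-D uint8 arrays).

    Builds the CDF of both images and maps each source level to the
    reference level with the same cumulative probability, equalising
    exposure differences between overlapping camera views.
    """
    s_vals, s_counts = np.unique(src.ravel(), return_counts=True)
    r_vals, r_counts = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / src.size
    r_cdf = np.cumsum(r_counts) / ref.size
    # interpolate source quantiles into the reference grey-level axis
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    lut = dict(zip(s_vals, mapped))
    return np.vectorize(lut.get)(src).astype(np.uint8)
```

In a stitching pipeline this would run on the overlap region only, with the resulting look-up table applied to the whole view.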

A Study on Fingerprinting Robustness Indicators for Immersive 360-degree Video (실감형 360도 영상 특징점 기술 강인성 지표에 관한 연구)

  • Kim, Youngmo;Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon
    • Journal of IKEEE / v.24 no.3 / pp.743-753 / 2020
  • In this paper, we propose a set of robustness indicators for immersive 360-degree video. With the full-fledged service of mobile carriers' 5G networks, large-capacity immersive 360-degree videos can be used at high speed anytime, anywhere. However, since such videos can be illegally distributed on web-hard services and torrents after DRM dismantling and various video modifications, evaluation indicators are required that can objectively evaluate filtering performance for copyright protection. The proposed robustness indicators extend the existing 2D video robustness indicators and additionally consider the projection and playback methods that characterize immersive 360-degree video. A performance evaluation was carried out on a sample filtering system, which achieved an excellent recognition rate of 95% or more with an execution time of about 3 seconds.

Performance Analysis of Hybrid Equiangular Cubemap (HEC) for 360 Video Coding (360 비디오 부호화를 위한 HEC 투영 성능분석)

  • Kim, Nam-Cheol;Kim, Hyun-Ho;Yoon, Yong-Uk;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.141-142 / 2018
  • 360 video has been attracting attention as an immersive medium along with the spread of VR media, and the Joint Video Experts Team (JVET) includes 360 video coding in the ongoing post-HEVC standardization of Versatile Video Coding (VVC). JVET is currently considering various schemes for projecting sphere images onto 2D in order to code 360 video. Such 2D projections introduce conversion distortion, as the pixel samples of the sphere image are mapped non-uniformly onto the 2D image, which degrades the coding efficiency of 360 video. This paper introduces the existing Equi-Angular Cubemap (EAC) and Hybrid Equiangular Cubemap (HEC), which are projection schemes improving on CMP, presents an extended conversion scheme for HEC based on them, and verifies the objective/subjective coding performance.
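The non-uniform sampling that EAC corrects can be seen in its coordinate remapping: a plain cubemap oversamples face corners, and EAC compensates by warping each face coordinate u ∈ [-1, 1] to u' = (4/π)·atan(u), which equalizes the angular step per pixel. The sketch below shows that forward/inverse pair only; the HEC hybrid blending from the paper is not reproduced here.

```python
import math

def eac_forward(u):
    """CMP face coordinate u in [-1, 1] -> EAC coordinate in [-1, 1].

    The arctangent warp makes equal steps in the output correspond to
    equal angular steps on the sphere, reducing corner oversampling.
    """
    return (4.0 / math.pi) * math.atan(u)

def eac_inverse(v):
    """EAC coordinate back to the plain cubemap face coordinate."""
    return math.tan(math.pi * v / 4.0)
```

The endpoints map to themselves (atan(±1) = ±π/4), so face boundaries line up between CMP and EAC; only the interior sampling density changes.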


Standardization Trend of 3DoF+ Video for Immersive Media (이머시브미디어를 3DoF+ 비디오 부호화 표준 동향)

  • Lee, G.S.;Jeong, J.Y.;Shin, H.C.;Seo, J.I.
    • Electronics and Telecommunications Trends / v.34 no.6 / pp.156-163 / 2019
  • As a primitive immersive video technology, a three degrees of freedom (3DoF) 360° video can currently render viewport images that depend on the rotational movements of the viewer. However, rendering a flat 360° video, that is, one supporting head rotations only, may cause visual discomfort, especially when objects close to the viewer are rendered. 3DoF+ adds horizontal, vertical, and depth translations to the head movements of a seated person. 3DoF+ 360° video is positioned between 3DoF and six degrees of freedom and can realize motion parallax with relatively simple virtual reality software in head-mounted displays. This article introduces the standardization trends for 3DoF+ video in the MPEG-I Visual group.