• Title/Summary/Keyword: 360 VR Video


Improving Compression Efficiency of 360-Degree VR Video by Correcting Perspective in Cubemap Projection (Cubemap Projection 360도 VR 비디오에서 시점 보정을 통한 압축 효율 향상 방법)

  • Yoon, Sung Jea; Park, Gwang Hoon / Journal of Broadcast Engineering / v.22 no.1 / pp.136-139 / 2017
  • Recently, many companies and consumers have shown great interest in VR (Virtual Reality), and many VR devices such as HMDs (Head Mounted Displays) and 360-degree VR cameras have been released on the market. Current 360-degree VR video is encoded with codecs originally designed for conventional 2D video, so compression efficiency is not optimal: the encoder and decoder do not account for the characteristics of 360-degree VR video. In this paper, we propose a method to improve compression efficiency by using a reference frame that compensates for the distortions caused by the characteristics of 360-degree VR video. Applying the proposed method, we were able to increase compression efficiency by providing better prediction.
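The distortion this abstract refers to comes from cubemap projection sampling the sphere non-uniformly: a pixel step near a face's center covers a larger viewing angle than the same step near a corner. A minimal sketch of that geometry (illustrative only, not the paper's correction method):

```python
import math

def cube_face_to_direction(face, u, v):
    """Map normalized face coordinates (u, v in [-1, 1]) on a cubemap
    face to a unit 3D viewing direction. Face naming is illustrative."""
    if face == "front":    x, y, z = u, v, 1.0
    elif face == "back":   x, y, z = -u, v, -1.0
    elif face == "right":  x, y, z = 1.0, v, -u
    elif face == "left":   x, y, z = -1.0, v, u
    elif face == "top":    x, y, z = u, 1.0, -v
    elif face == "bottom": x, y, z = u, -1.0, v
    else:
        raise ValueError(face)
    n = math.sqrt(x * x + y * y + z * z)
    return (x / n, y / n, z / n)

def angular_step(face, u, v, du=1e-3):
    """Viewing angle covered by a small pixel step at (u, v): it is
    largest at the face center and shrinks toward the corners, which
    is the non-uniform sampling a perspective correction compensates."""
    a = cube_face_to_direction(face, u, v)
    b = cube_face_to_direction(face, u + du, v)
    dot = max(-1.0, min(1.0, sum(p * q for p, q in zip(a, b))))
    return math.acos(dot)
```

Comparing `angular_step` at the center and near a corner shows the density imbalance that degrades inter prediction when a codec treats the faces as ordinary 2D video.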

Study of Capturing Real-Time 360 VR 3D Game Video for 360 VR E-Sports Broadcast (360 VR E-Sports 중계를 위한 실시간 360 VR 3D Stereo 게임 영상 획득에 관한 연구)

  • Kim, Hyun Wook; Lee, Jun Suk; Yang, Sung Hyun / Journal of Broadcast Engineering / v.23 no.6 / pp.876-885 / 2018
  • Although the e-sports broadcasting market based on VR (Virtual Reality) is growing, technology development for securing market competitiveness remains inadequate in Korea. Global companies such as SLIVER and Facebook have already developed, and are trying to commercialize, 360 VR broadcasting technology that can broadcast e-sports as 4K 30 FPS VR video. However, 2D video is a poor source for 360 VR video: it yields a less immersive experience, causes dizziness, and has low in-scene resolution. In this paper, we propose and implement a virtual camera technology that captures in-game space as 4K 3D 60 FPS 360 video for e-sports VR broadcasting, and we verify the feasibility of obtaining stereo 360 video at up to 4K/60 FPS through experiments with the virtual camera set up in game-engine sample games and commercial games.
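A virtual cubemap rig of the kind described here typically renders six 90-degree-FOV views per eye per frame; the sheer render count is what makes stereo 4K/60FPS capture demanding. A hedged sketch (orientations and the rig layout are assumptions, not taken from the paper):

```python
# Hypothetical sketch: the six 90-degree-FOV camera orientations a
# virtual in-engine rig would render each frame to assemble one cubemap.
# Each entry is (forward_vector, up_vector) in a right-handed frame.
CUBE_VIEWS = {
    "front":  ((0, 0, 1),  (0, 1, 0)),
    "back":   ((0, 0, -1), (0, 1, 0)),
    "right":  ((1, 0, 0),  (0, 1, 0)),
    "left":   ((-1, 0, 0), (0, 1, 0)),
    "top":    ((0, 1, 0),  (0, 0, -1)),
    "bottom": ((0, -1, 0), (0, 0, 1)),
}

def face_renders(fps, seconds, stereo=True):
    """Number of per-face renders needed: 6 faces per eye per frame."""
    eyes = 2 if stereo else 1
    return 6 * eyes * fps * seconds
```

At 60 FPS stereo this is 720 face renders per second, which illustrates why capturing stereo 360 game video in real time requires dedicated in-engine camera technology rather than recording the player's 2D view.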

Arrangement of narrative events and background in the contents of VR 360 video (VR 360 영상 콘텐츠에서의 서사적 사건 및 배경의 배치)

  • Lee, You-Na; Park, Jin-Wan / Journal of Digital Contents Society / v.19 no.9 / pp.1631-1639 / 2018
  • VR 360 video content requires new visual-language research because, unlike traditional video content, the viewer inevitably sees only part of the scene at a time. In this study, we focus on the fact that the arrangement of events and background elements within the 360-degree extended background of VR 360 video content plays a major role in guiding the audience. Accordingly, this study examines the arrangement of events and background elements from a narrative point of view and analyzes how they are handled in VR 360 video content cases.

A Study on the High Quality 360 VR Tiled Video Edge Streaming (방송 케이블 망 기반 고품질 360 VR 분할 영상 엣지 스트리밍에 관한 연구)

  • Kim, Hyun-Wook; Yang, Jin-Wook; Yoon, Sang-Pil; Jang, Jun-Hwan; Park, Woo-Chool / Journal of the Korea Convergence Society / v.10 no.12 / pp.43-52 / 2019
  • The 360 Virtual Reality (VR) service is getting attention in the domestic streaming market as the 5G era approaches. However, existing IPTV-based 360 VR video services use up to 4K 360 VR video, which is not enough to satisfy customers; a resolution over 8K is generally required to meet users' satisfaction level. The bit rate of 8K video exceeds the bandwidth of a single QAM channel (38.817 Mbps), so 8K video cannot be delivered over the existing IPTV broadcast network. Therefore, we propose and implement an edge streaming system for low-latency streaming to display devices on the local network. Our experiments confirmed that 360 VR streaming with a viewport switching delay under 500 ms can be achieved while using less than 100 Mbps of network bandwidth.
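The bandwidth argument behind tiled streaming can be sketched with simple arithmetic: only the tiles inside the viewport stream at high quality, the rest at low quality. All numbers below are illustrative assumptions, not measurements from the paper:

```python
def tiled_bitrate(grid_w, grid_h, viewport_tiles, hi_mbps, lo_mbps):
    """Total bitrate when only the viewport tiles stream at high
    quality and the remaining tiles stream at a low fallback quality."""
    total = grid_w * grid_h
    hi = min(viewport_tiles, total)
    return hi * hi_mbps + (total - hi) * lo_mbps

# e.g. a 4x3 tile grid, 4 tiles in the viewport at 8 Mbps, the rest
# at 1 Mbps: 4*8 + 8*1 = 40 Mbps. That already exceeds the 38.817 Mbps
# single-QAM channel, while streaming all 12 tiles at 8 Mbps would
# need 96 Mbps, which motivates serving the tiles from a local edge.
```

The gap between the viewport-only figure and the all-high-quality figure is what the edge streaming system exploits: the edge holds the full tile set, and only the viewport-dependent selection crosses the last hop.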

Implementation of 360 VR Tiled Video Player with Eye Tracking based Foveated Rendering (시점 추적 기반 Foveated Rendering을 지원하는 360 VR Tiled Video Player 구현)

  • Kim, Hyun Wook; Yang, Sung Hyun / Journal of Korea Multimedia Society / v.21 no.7 / pp.795-801 / 2018
  • Various technologies for delivering high-quality 360 VR media content are being studied and developed. However, rendering high-quality media images is very difficult with the limited resources of an HMD (Head Mounted Display). In this paper, we design and implement a 360 VR player that renders high-quality 360 tiled video to an HMD, and we develop a multi-resolution-based foveated rendering technology. Experiments confirmed that it improves video rendering performance well beyond existing tiled video rendering technology.
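The core of eye-tracking-based foveated rendering is assigning each tile a quality tier from its angular distance to the tracked gaze direction. A minimal sketch; the tier thresholds are made-up assumptions, not the paper's parameters:

```python
import math

def tile_quality(gaze, tile_center, fovea_deg=10.0, mid_deg=25.0):
    """Pick a rendering quality tier for a tile from the angle between
    the gaze direction and the tile center (both unit vectors).
    Threshold angles are illustrative, not from the paper."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(gaze, tile_center))))
    ang = math.degrees(math.acos(dot))
    if ang <= fovea_deg:
        return "high"    # foveal region: full resolution
    if ang <= mid_deg:
        return "medium"  # parafoveal ring: reduced resolution
    return "low"         # periphery: lowest resolution
```

Run per frame over all visible tiles, this lets the player spend its limited decode and render budget where the eye can actually resolve detail, which is the performance gain the abstract reports over uniform tiled rendering.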

Real-Time Compressed Video Acquisition System for Stereo 360 VR (Stereo 360 VR을 위한 실시간 압축 영상 획득 시스템)

  • Choi, Minsu; Paik, Joonki / Journal of Broadcast Engineering / v.24 no.6 / pp.965-973 / 2019
  • In this paper, a stereo 4K@60fps 360 VR real-time video capture system consisting of video stream capture, video encoding, and stitching modules is designed. The system produces stereo 4K@60fps 360 VR video by stitching, in real time, six 2K@60fps streams captured from six cameras over the HDMI interface. In the capture phase, video is captured from each camera in real time using multiple threads. In the encoding phase, raw-frame memory transmission and parallel encoding reduce the resource usage of data transfer between the capture and stitching modules. In the stitching phase, real-time operation is secured by stitching-calibration preprocessing.
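The one-thread-per-camera capture stage can be sketched with standard-library threads and a shared queue; `get_frame` below stands in for the actual HDMI grab call, and the structure is an assumption about the design, not the paper's code:

```python
import queue
import threading

def capture_worker(cam_id, get_frame, out_q, n_frames):
    """One thread per camera: pull frames from the capture interface
    (get_frame stands in for an HDMI grab call) and queue them tagged
    with their frame index and camera id."""
    for i in range(n_frames):
        out_q.put((i, cam_id, get_frame(cam_id, i)))

def capture_all(n_cams, get_frame, n_frames):
    """Run one capture thread per camera, then regroup the frames by
    frame index so each group holds one frame per camera, ready for
    the stitching stage."""
    q = queue.Queue()
    threads = [threading.Thread(target=capture_worker,
                                args=(c, get_frame, q, n_frames))
               for c in range(n_cams)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    frames = {}
    while not q.empty():
        idx, cam, frame = q.get()
        frames.setdefault(idx, {})[cam] = frame
    return frames
```

In a real pipeline the stitcher would consume each completed group as it fills rather than after all threads join; the batch version here just keeps the sketch short.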

Luminance Compensation using Feature Points and Histogram for VR Video Sequence (특징점과 히스토그램을 이용한 360 VR 영상용 밝기 보상 기법)

  • Lee, Geon-Won; Han, Jong-Ki / Journal of Broadcast Engineering / v.22 no.6 / pp.808-816 / 2017
  • 360 VR video systems have become important for providing an immersive effect for viewers. Such a system consists of stitching, projection, compression, inverse projection, and viewport extraction. In this paper, an efficient luminance compensation technique for 360 VR video sequences is proposed, using feature extraction and histogram equalization algorithms. The proposed luminance compensation algorithm enhances stitching performance in the 360 VR system. Simulation results showed that the proposed technique increases the quality of the displayed image.
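A simple histogram-based form of luminance compensation maps each source luminance level to the reference level with the closest cumulative frequency. This is a generic histogram-matching sketch of the idea, not the paper's algorithm (which additionally uses feature points):

```python
def cdf(hist):
    """Cumulative distribution of a luminance histogram."""
    total = sum(hist)
    acc, out = 0, []
    for h in hist:
        acc += h
        out.append(acc / total)
    return out

def match_luminance(src_hist, ref_hist):
    """Build a lookup table mapping each source luminance level to the
    reference level whose cumulative frequency is closest, a basic
    histogram-matching form of luminance compensation."""
    sc, rc = cdf(src_hist), cdf(ref_hist)
    lut = []
    for s in sc:
        # closest reference level by CDF distance
        lut.append(min(range(len(rc)), key=lambda j: abs(rc[j] - s)))
    return lut
```

Applying such a LUT to one camera's overlap region before blending removes the brightness seam between adjacent cameras, which is why compensation of this kind improves stitching quality.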

MPEG Omnidirectional Media Format (OMAF) for 360 Media (360 미디어를 위한 MPEG Omnidirectional Media Format (OMAF) 표준 기술)

  • Oh, Sejin / Journal of Broadcast Engineering / v.22 no.5 / pp.600-607 / 2017
  • Virtual Reality (VR) has lately gained significant attention, driven primarily by the recent market availability of consumer devices such as mobile phone-based Head Mounted Displays (HMDs). Apart from classic gaming use cases, the delivery of 360° video is considered another major application and is expected to be ubiquitous in the near future. However, delivering and decoding high-resolution 360° video at desirable quality is challenging due to network limitations and constraints on end-device decoding and processing. In this paper, we focus on aspects of 360° video streaming and provide an overview and discussion of possible solutions as well as considerations for future VR video streaming applications. The paper mainly covers the status of the standardization activities on the Omnidirectional MediA Format (OMAF), which supports interoperable 360° video streaming services; MPEG's ongoing work on OMAF aims at harmonizing VR video platforms and applications. The paper also discusses, in the context of the general OMAF service architecture, the integration of OMAF content with MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) for 360° video streaming services.

Study on Compositing Editing of 360˚ VR Actual Video and 3D Computer Graphic Video (360˚ VR 실사 영상과 3D Computer Graphic 영상 합성 편집에 관한 연구)

  • Lee, Lang-Goo; Chung, Jean-Hun / Journal of Digital Convergence / v.17 no.4 / pp.255-260 / 2019
  • This study concerns the efficient compositing of 360° video and 3D graphics. First, video filmed by a binocular integral-type 360° camera was stitched, and the location values of the camera and objects were extracted. The extracted location data were then moved into a 3D program to create 3D objects, and methods for natural compositing were researched. As a result, rendering factors and a rendering method for naturally compositing 360° video and 3D graphics were derived. First, the rendering factors were the 3D objects' location, material quality, lighting, and shadow. Second, regarding the rendering method, the necessity of a live-action-video-based rendering method was identified. The compositing method provided through this study's process and results is expected to help the research and production of 360° video and VR video content.

Method for Applying Wavefront Parallel Processing on Cubemap Video (큐브맵 영상에 Wavefront 병렬 처리를 적용하는 방법)

  • Hong, Seok Jong; Park, Gwang Hoon / Journal of Broadcast Engineering / v.22 no.3 / pp.401-404 / 2017
  • 360 VR video is stored in a projected format such as an equirectangular or cubemap layout. Although these formats have different characteristics, they share a resolution higher than that of normal 2D video. Coding and decoding 360 VR video therefore takes much longer than for 2D video, so parallel processing techniques are essential. HEVC, the state-of-the-art 2D video codec, standardizes Wavefront Parallel Processing (WPP) for parallelization. This technique is optimized for 2D video and does not show optimal performance on 360 VR video, so a WPP method suited to 360 VR video is required. In this paper, we propose a WPP coding/decoding method that improves WPP performance on cubemap-format 360 VR video. The method was implemented in the HEVC reference software HM 12.0. The experimental results show no significant PSNR loss compared with existing WPP, while coding complexity is reduced by a further 15% to 20%. The proposed method is expected to be included in future 360 VR video codecs.
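Standard WPP lets each CTU row start once the row above is two CTUs ahead, so all CTUs on the same diagonal can be processed in parallel. A minimal sketch of that baseline schedule (this illustrates plain HEVC WPP, not the cubemap-specific method the paper proposes):

```python
def wpp_schedule(rows, cols, lag=2):
    """Group a CTU grid into wavefronts: CTU (r, c) may start once the
    CTU two columns ahead in the row above is done, so every CTU on
    the diagonal r*lag + c can be processed in parallel."""
    waves = {}
    for r in range(rows):
        for c in range(cols):
            waves.setdefault(r * lag + c, []).append((r, c))
    return [waves[k] for k in sorted(waves)]
```

For a wide, short frame the early and late wavefronts contain few CTUs, which limits parallel utilization; a cubemap layout changes the dependency pattern across face boundaries, which is the opening the paper's improved method targets.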