• Title/Summary/Keyword: 360 video

Adaptation of VR 360-degree Intravenous Infusion Educational Content for Nursing Students (간호대학생을 위한 가상현실(VR) 360도 정맥수액주입 교육용 콘텐츠의 적용)

  • Park, Jung-Ha
    • The Journal of the Convergence on Culture Technology / v.6 no.4 / pp.165-170 / 2020
  • In this study, VR 360-degree video content for intravenous infusion education was applied, and the empathy and flow of graduating nursing students were measured to provide baseline data on whether VR 360-degree video can serve as educational content in the future. The VR 360-degree intravenous infusion educational content was developed in a four-step process of planning, production, revision, and completion. The study used a descriptive design, and the study period was from November 9 to November 22, 2019. The subjects were 64 fourth-year nursing students at a university. The students watched the VR 360-degree intravenous infusion educational content using an HMD (head-mounted display) under the safety management of the researcher. Empathy averaged 5.32±0.88 points and flow 6.02±0.84 points on a 7-point scale. The content developed in this study can be used as an educational medium in curricular and extracurricular programs, and future studies should develop and verify specific teaching and learning methods.

Projection format and quality metrics of 360 video (360 VR 영상의 프로젝션 포맷 및 성능 평가 방식)

  • Park, Seong-Hwan;Kim, Kyu-Heon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.182-184 / 2019
  • Recently, interest in technologies that provide users with more immersive content has been growing, and the most representative of these is 360 VR video. MPEG (Moving Picture Experts Group), a media standardization body, is responding to this trend through the MPEG-I (Immersive) next-generation project group. MPEG-I is standardizing eight parts, targeting 6DoF VR video by the end of 2021. In 360 VR video, the pixels of the captured image exist in 3D space; processing and displaying them requires conversion to a 2D image, and the projection format is what performs this conversion. JVET (Joint Video Exploration Team) is currently studying projection formats that minimize the loss incurred in the 3D-to-2D conversion. This paper introduces the various projection formats proposed to date and the methods used to evaluate their performance.
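The 3D-to-2D conversion described above can be illustrated with the simplest projection format, ERP (equirectangular projection): a viewing direction on the unit sphere maps to longitude/latitude, which map linearly to pixel coordinates. A minimal sketch of the forward mapping (an illustration of the general technique, not code from the paper):

```python
import math

def erp_project(x, y, z, width, height):
    """Map a 3D unit direction (x, y, z) to ERP pixel coordinates.

    Longitude (yaw) spans [-pi, pi] across the image width;
    latitude (pitch) spans [-pi/2, pi/2] across the height.
    """
    lon = math.atan2(x, z)   # [-pi, pi]
    lat = math.asin(y)       # [-pi/2, pi/2]
    u = (lon / (2 * math.pi) + 0.5) * width
    v = (0.5 - lat / math.pi) * height
    return u, v

# The forward direction (0, 0, 1) lands at the image centre.
u, v = erp_project(0.0, 0.0, 1.0, 3840, 1920)
print(u, v)  # 1920.0 960.0
```

Because rows near the poles cover far less spherical area than rows at the equator, ERP oversamples the poles; the alternative formats studied in JVET trade this distortion against face discontinuities.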

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan;Jang, Seyoung;Yoo, Injae;Lee, Jaechung;Kim, Seok-Yoon;Kim, Youngmo
    • Journal of IKEEE / v.24 no.2 / pp.529-535 / 2020
  • As a key technology of the 4th industrial revolution, immersive 360-degree video content is drawing attention. The worldwide market for immersive 360-degree video content is projected to grow from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video content is distributed through illegal channels such as Webhard and torrents, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright filtering technology to prevent such illegal distribution. The technical difficulties with immersive 360-degree videos arise because they require ultra-high-quality pictures and merge images captured by two or more cameras into a single image, which creates distortion regions. There are also technical limitations such as the increase in feature-point data due to the ultra-high definition and the resulting processing-speed requirements. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper proposes a feature-point extraction and identification technique that selects object-identification areas excluding regions with severe distortion, recognizes objects in those areas using deep learning, and extracts feature points from the identified object information. Compared with the previously proposed method that extracts feature points from the stitching area of immersive content, the proposed technique shows a clear performance gain.
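The idea of restricting identification to low-distortion areas can be made concrete with a generic example (not the authors' actual criterion): in an equirectangular frame, distortion grows toward the poles, so one simple rule keeps only a central latitude band for object identification:

```python
def identification_rows(height, max_abs_lat_deg=60):
    """Return the (top, bottom) pixel-row range of an ERP frame whose
    latitude stays within +/-max_abs_lat_deg, i.e. the band kept for
    object identification; rows nearer the poles are discarded as too
    distorted for reliable feature points.  The 60-degree threshold is
    an illustrative assumption, not a value from the paper.
    """
    # Row 0 is latitude +90 degrees, the last row -90, linear between.
    deg_per_row = 180 / height
    margin = int(round((90 - max_abs_lat_deg) / deg_per_row))
    return margin, height - margin

top, bottom = identification_rows(1920)
print(top, bottom)  # 320 1600
```

Feature extraction would then run only on `frame[top:bottom, :]`, cutting both the distorted input and the feature-point data volume that the abstract identifies as a bottleneck.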

Efficient Transmission Scheme with Viewport Prediction of 360VR Content using Sound Location Information (360VR 콘텐츠의 음원위치정보를 활용한 시점예측 전송기법)

  • Jeong, Eunyoung;Kim, Dong Ho
    • Journal of Broadcast Engineering / v.24 no.6 / pp.1002-1012 / 2019
  • 360VR content requires low latency, such as immediate response to viewers' viewport changes, together with high-quality video delivery. Efficient transmission that guarantees the QoE (Quality of Experience) of 360VR content within limited bandwidth must therefore be considered. Several studies have reduced overall bandwidth consumption by predicting a user's viewport and allocating different bit rates to the corresponding area. In this paper, we propose a novel viewport-prediction scheme that uses the sound-source location information of 360VR content as an auditory cue alongside visual cues. We also propose an efficient transmission algorithm that allocates bit rates based on the improved viewport prediction. The proposed scheme improves the accuracy of viewport prediction and delivers high-quality video to the tiles corresponding to the user's viewpoint within the limited bandwidth.
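The bit-rate allocation step this abstract describes can be sketched with a simple tiled policy: tiles inside the predicted viewport share most of the bandwidth budget, the rest share the remainder. The 80/20 split and tile layout below are illustrative assumptions, not the paper's algorithm:

```python
def allocate_bitrates(tiles, predicted, total_kbps, hi_share=0.8):
    """Split a bandwidth budget across tiles: tiles inside the
    predicted viewport share hi_share of the budget, remaining
    tiles share the rest (an illustrative policy).
    """
    hi = [t for t in tiles if t in predicted]
    lo = [t for t in tiles if t not in predicted]
    rates = {}
    for t in hi:
        rates[t] = total_kbps * hi_share / len(hi)
    for t in lo:
        rates[t] = total_kbps * (1 - hi_share) / len(lo)
    return rates

tiles = list(range(8))
rates = allocate_bitrates(tiles, predicted={2, 3}, total_kbps=10000)
print(rates[2])  # 4000.0 (viewport tile)
```

The paper's contribution is to make the `predicted` set more accurate by fusing sound-source location with visual cues; the better the prediction, the more safely the budget can be skewed toward the viewport tiles.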

Method of creating augmented saliency map for 360-degree video (360 도 비디오의 객체 증강 saliency map 생성 방법)

  • Shim, Yoojeong;Seo, Jimin;Lee, Myeong-jin
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.109-111 / 2021
  • 360-degree video provides a sense of immersion unlike conventional media, but HMD-based viewing can cause motion sickness and physical discomfort. In addition, there is demand for services on conventional displays due to limited penetration of viewing devices, network bandwidth constraints, and the demand for single-source multi-use. This paper presents a technique for augmenting a visual saliency map using the dynamic attributes of objects in the video, as needed for viewport extraction in conventional-display services of 360-degree video, together with a service architecture that uses it.
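One generic way to fold the dynamic attributes of objects into a saliency map is a weighted blend of static saliency with per-pixel motion magnitude, so that moving objects gain attention weight. This is a hedged sketch of the general idea, with a hypothetical blend weight `alpha`, not the authors' exact formulation:

```python
def augment_saliency(static_sal, motion_mag, alpha=0.5):
    """Blend a static saliency map with per-pixel motion magnitude
    (both normalized to [0, 1]); alpha controls how strongly motion
    boosts attention.  Illustrative formulation only.
    """
    return [[min(1.0, (1 - alpha) * s + alpha * m)
             for s, m in zip(srow, mrow)]
            for srow, mrow in zip(static_sal, motion_mag)]

static = [[0.2, 0.8], [0.4, 0.0]]
motion = [[1.0, 0.0], [0.0, 0.6]]
augmented = augment_saliency(static, motion)
```

A viewport extractor for a conventional display would then track the peak of the augmented map over time to choose which part of the 360-degree frame to show.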

An Efficient Frame Packing Method for Icosahedral Projection (ISP) in 360 Video (360 비디오의 ISP 를 위한 효과적인 프레임 패킹 기법)

  • Kim, Hyun-Ho;Yoon, Yong-Uk;Park, Do-Hyeon;Kim, Jae-Gon
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.11a / pp.6-7 / 2017
  • 360 video is a new type of media that provides immersion and has recently been drawing increasing attention. Accordingly, JVET (Joint Video Exploration Team), which is exploring next-generation video coding standard technologies, is discussing 360 video as a standardization target alongside SDR and HDR video. Various 2D projection methods for coding 360 video are currently being proposed in JVET. The image converted to 2D may contain discontinuities between projection faces and inactive regions, which degrade coding efficiency. This paper presents an effective frame packing method that reduces such discontinuities and inactive regions in ISP (Icosahedral Projection). The proposed method arranges the discontinuous boundaries between projection faces efficiently, improving subjective quality and coding efficiency. Experimental results show BD-rate reductions of 1.0%, 1.0%, 1.27%, and 0.63% compared with the existing CISP (Compact ISP), and improved subjective quality was also confirmed.

Multiple GPU Scheduling for Improved Acquisition of Real-Time 360 VR Game Video (실시간 360 VR 스테레오 게임 영상 획득 성능 개선을 위한 다중 GPU 스케줄링에 관한 연구)

  • Lee, Junsuk;Paik, Joonki
    • Journal of Broadcast Engineering / v.24 no.6 / pp.974-982 / 2019
  • A real-time 360 VR (Virtual Reality) stereo video acquisition technique based on a game engine has been proposed, but it does not fully utilize GPU (Graphics Processing Unit) resources due to bottlenecks. In this paper, we propose an improved GPU scheduling technique that resolves the bottleneck of the existing technique, and we measure its performance using sample games from a commercial game engine. The proposed technique improved performance by up to 70% and used GPU resources more evenly than the existing technique.
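Balancing capture work across GPUs, as this abstract aims to do, can be illustrated with the classic least-loaded (longest-processing-time-first) heuristic. The job names and costs are hypothetical, and this is a generic balancing policy rather than the scheduler proposed in the paper:

```python
import heapq

def schedule(job_costs, n_gpus):
    """Greedily assign each job (largest first) to the currently
    least-loaded GPU.  Returns {job: gpu_index}.  Illustrative
    LPT heuristic, not the paper's scheduler.
    """
    loads = [(0.0, g) for g in range(n_gpus)]
    heapq.heapify(loads)
    assignment = {}
    for job, cost in sorted(job_costs.items(), key=lambda kv: -kv[1]):
        load, g = heapq.heappop(loads)  # least-loaded GPU
        assignment[job] = g
        heapq.heappush(loads, (load + cost, g))
    return assignment

# Hypothetical per-frame workloads (ms) for a stereo 360 capture.
jobs = {"left_eye": 6.0, "right_eye": 6.0, "stitch": 4.0, "encode": 2.0}
print(schedule(jobs, 2))
```

With two GPUs, the two heavy per-eye renders land on different devices, which is the even-utilization behavior the paper reports for its improved scheduler.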

A Study on Robustness Indicators for Performance Evaluation of Immersive 360-degree Video Filtering (실감형 360도 영상 필터링 성능 평가를 위한 강인성 지표에 관한 연구)

  • Jang, Seyoung;Yoo, Injae;Lee, Jaecheng;Park, Byeongchan;Kim, Youngmo;Kim, Seok-Yoon
    • Proceedings of the Korean Society of Computer Information Conference / 2020.07a / pp.437-438 / 2020
  • The domestic immersive content market is growing at a compound annual rate of 42.9% and is expected to reach about 5.7271 trillion won in 2020. In particular, since 2018 the content market has expanded faster than the hardware market. As distribution of immersive content has begun in earnest, copyright infringement cases have appeared, but they have received little attention amid efforts to broaden the market base. Considering that immersive works are mostly produced by small companies at high cost, filtering technology for copyright protection is strongly required. However, robustness indicators, the criteria for evaluating the performance of filtering technology, have not yet been established. This paper therefore proposes robustness indicators for immersive 360-degree video content that are not tied to any specific technology.

A Study on the Development of Camera Gimbal System for Unmanned Flight Vehicle with VR 360 Degree Omnidirectional Photographing (360도 VR 촬영을 위한 무인 비행체용 카메라 짐벌 시스템 개발에 관한 연구)

  • Jung, Nyum;Kim, Sang-Hoon
    • The Journal of the Korea institute of electronic communication sciences / v.11 no.8 / pp.767-772 / 2016
  • The purpose of this paper is to develop a gimbal system mounted on a UFV (unmanned flight vehicle) for 360-degree VR video. In particular, even if the UFV rotates in any direction, the camera position is held fixed using a gyro sensor to minimize shaking, so that the camera system remains stable for capturing 360° panoramic VR images.

Performance Analysis on View Synthesis of 360 Videos for Omnidirectional 6DoF in MPEG-I (MPEG-I의 6DoF를 위한 360 비디오 가상시점 합성 성능 분석)

  • Kim, Hyun-Ho;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.24 no.2 / pp.273-280 / 2019
  • 360 video is attracting attention as immersive media with the spread of VR applications, and the MPEG-I (Immersive) Visual group is actively working on standardization to support immersive media experiences with up to six degrees of freedom (6DoF). In the virtual space of omnidirectional 6DoF, defined as the case that provides 6DoF within a restricted area, viewing the scene from an arbitrary position requires rendering additional viewpoints, called virtual omnidirectional viewpoints, by view synthesis. This paper presents view-synthesis results and their analysis, carried out as exploration experiments (EEs) on omnidirectional 6DoF in MPEG-I. Specifically, it reports synthesis results under various conditions, such as the distance between the input views and the virtual view to be synthesized, and the number of input views selected from the given set of 360 videos providing omnidirectional 6DoF.
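The input-view selection that these experiments vary can be sketched as picking the capture positions closest to the virtual viewpoint. The camera positions below are hypothetical, and this nearest-neighbor rule is a simplified stand-in for the selection strategies compared in the EEs:

```python
import math

def select_views(camera_positions, target, k=2):
    """Pick the k capture positions closest (Euclidean distance)
    to the virtual viewpoint to use as synthesis inputs.
    Simplified illustration of view selection.
    """
    ranked = sorted(camera_positions, key=lambda p: math.dist(p, target))
    return ranked[:k]

# Hypothetical 360-camera positions and a virtual viewpoint between them.
cams = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (2, 2, 0)]
print(select_views(cams, target=(0.8, 0.1, 0.0), k=2))
# → [(1, 0, 0), (0, 0, 0)]
```

The EEs then measure how synthesis quality degrades as the chosen inputs lie farther from the virtual view, or as `k` changes.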