• Title/Summary/Keyword: Immersive Media


Light ID and HMD-AR Based Interactive Exhibition Design for Jeonju Hanok Village Immersive 3D View (전주 한옥마을의 실감 3D View를 위한 Light ID 및 HMD-AR 기반 인터렉티브 전시 설계)

  • Min, Byung-Jun;Mariappan, Vinayagam;Cha, Jae-Sang;Kim, Dae-Young;Cho, Ju-Phil
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.11 no.4
    • /
    • pp.414-420
    • /
    • 2018
  • Digital convergence is opening new ways to engage visitors by superimposing virtual content onto media captured from the real world. This paper proposes a Light ID based interactive 3D immersive exhibition view using HMD-AR technology. The approach requires no additional infrastructure to enable the service; it uses the lighting or display devices already installed in the exhibit area. Here, the Light ID serves as both a location identifier and a communication medium for accessing content, unlike a QR tag, which only provides download information through a web interface. The method exploits the advantages of camera-based optical wireless communication (OWC) to receive media content on a smart device and deliver immersive 3D content visualization using AR. The proposed exhibition method was emulated on a GALAXY S8 smartphone and its visual performance evaluated for Jeonju Hanok Village. The experimental results show that the proposed method can provide an immersive 3D view of exhibited objects in real time.
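
A minimal sketch of the Light ID resolution step the abstract describes: the ID demodulated from the camera stream acts both as a location identifier and as a key for fetching the matching AR content. The registry contents, ID values, and asset paths below are illustrative assumptions, not details from the paper.

```python
# Hypothetical Light ID -> AR content registry. In the paper's system the
# ID is demodulated from installed lighting/displays via camera-based OWC;
# here we assume that step is done and only show the lookup.
CONTENT_REGISTRY = {
    # light_id: (exhibit location, URI of the 3D asset to overlay in AR)
    0x2A01: ("Jeonju Hanok Village - Gate", "assets/gate_3d_model.glb"),
    0x2A02: ("Jeonju Hanok Village - Hall", "assets/hall_3d_model.glb"),
}

def resolve_light_id(light_id: int):
    """Map a demodulated Light ID to (location, content URI).

    Returns None when the ID is not registered, so the viewer can
    fall back to plain camera passthrough.
    """
    return CONTENT_REGISTRY.get(light_id)
```

Unlike a QR tag, which only yields a download URL, the same ID here doubles as a location identifier, so the viewer knows both *what* to render and *where* the user is standing.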

Interactive Virtual Studio & Immersive Viewer Environment (인터렉티브 가상 스튜디오와 몰입형 시청자 환경)

  • 김래현;박문호;고희동;변혜란
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1999.06b
    • /
    • pp.87-93
    • /
    • 1999
  • In this paper, we introduce a novel virtual studio environment in which a broadcaster in the virtual set interacts with tele-viewers as if they shared the same environment as participants. A tele-viewer participates physically in the virtual studio through a dummy head, physically located in the studio, equipped with video "eyes" and microphone "ears". The dummy head, as a surrogate of the tele-viewer, follows the tele-viewer's head movements, and the tele-viewer views and hears through the dummy head like a tele-operated robot. By introducing tele-presence technology into the virtual studio setting, the broadcaster can not only interact with the virtual set elements, as in a regular virtual studio, but also share the physical studio with the surrogates of the tele-viewers as participants. The tele-viewer may see the real broadcaster in the virtual set environment, and the other participants as avatars in place of their respective dummy heads. With an immersive display such as an HMD, the tele-viewer may look around the studio and interact with other avatars. The new interactive virtual studio with an immersive viewer environment may be applied to immersive tele-conferencing, tele-teaching, and interactive TV program production.


Cluster-based MV-HEVC Coding Mode Decision for MPEG Immersive Video (MPEG 몰입형 비디오를 위한 클러스터 기반 MV-HEVC 부호화 모드 결정)

  • Han, Chang-Hee;Jeong, Jong-Beom;Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.189-192
    • /
    • 2021
  • Techniques for efficiently processing multi-view video are being actively studied to provide the high sense of immersion of immersive video, such as three degrees of freedom (3DoF), three degrees of freedom plus (3DoF+), and six degrees of freedom (6DoF). Given an input immersive video, a pruning process removes redundancy between the basic views and the additional views and extracts the regions that are invisible in the basic views but visible in the additional views; the coding mode decision in the encoder that performs this pruning is therefore very important. This paper proposes a cluster-based sorting technique that, as an efficient coding structure for removing inter-view redundancy, encodes in parallel on a per-cluster basis the views selected from the pruning graph produced by the MPEG immersive video (MIV) view mode, one of the modes of the test model for immersive video (TMIV). Compared with the conventional method of encoding the selected views in index order, the proposed method showed an average BD-rate saving of 3.9% in peak signal-to-noise ratio (Y-PSNR). For a more objective quality measurement, this study also provides comparison results based on the immersive video peak signal-to-noise ratio (IV-PSNR), as well as a comparison with a pruning-based sorting technique aligned with the reference order.
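
The cluster partitioning step might be sketched as follows. This is a simplified stand-in under stated assumptions: views are grouped by a 1-D camera position using equal-width binning, whereas the paper derives its grouping from the TMIV pruning graph; the function name and clustering criterion are illustrative.

```python
def cluster_views(view_positions, num_clusters):
    """Partition view indices into clusters of spatially nearby cameras.

    view_positions: 1-D camera positions, one per selected view.
    Returns a list of clusters, each a list of view indices; each
    cluster could then be encoded independently (in parallel).
    """
    lo, hi = min(view_positions), max(view_positions)
    width = (hi - lo) / num_clusters or 1.0  # guard against all-equal positions
    clusters = [[] for _ in range(num_clusters)]
    for idx, pos in enumerate(view_positions):
        c = min(int((pos - lo) / width), num_clusters - 1)
        clusters[c].append(idx)
    return clusters
```

The point of the structure is that inter-view redundancy removal only needs to look inside a cluster, so clusters become independent encoding units rather than one long index-ordered chain.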


Improving immersive video compression efficiency by reinforcement learning (강화학습 기반 몰입형 영상 압축 성능 향상 기법)

  • Kim, Dongsin;Oh, Byung Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • fall
    • /
    • pp.33-36
    • /
    • 2021
  • In this paper, we propose a new method for improving the compression efficiency of immersive video using reinforcement learning. Immersive video refers to video that a user can experience directly, such as 3DoF+ video and point cloud video, and it carries a vast amount of information by nature. Many compression methods for immersive video are therefore being studied; generally, a method that projects the 3D image onto a 2D image is used. However, this process creates regions where no information exists, which can reduce compression efficiency. To solve this problem, we propose a reinforcement learning-based filling method that considers the characteristics of the images. Experimental results show that its performance is better than that of the conventional padding method.
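
To make the "filling" idea concrete, here is a minimal sketch of filling the empty regions that 3D-to-2D projection leaves behind. It uses a simple valid-neighbour mean as the fill rule; the paper's RL agent would instead learn the fill policy, so everything below is an illustrative baseline, not the proposed method.

```python
def fill_invalid(image, mask):
    """Fill pixels where mask is 0 with the mean of valid 4-neighbours.

    image: 2-D list of pixel values; mask: 2-D list, 1 = valid, 0 = empty
    (i.e. no information after projection). Single pass; pixels with no
    valid neighbour are set to 0. Smooth fills like this tend to cost
    fewer bits to encode than hard zero padding.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                continue  # keep valid pixels untouched
            vals = [image[ny][nx]
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]]
            out[y][x] = sum(vals) / len(vals) if vals else 0
    return out
```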


Yoga of Consilience through Immersive Sound Experience (실감음향 체험을 통한 통섭의 요가)

  • Hyon, Jinoh
    • Journal of Broadcast Engineering
    • /
    • v.26 no.5
    • /
    • pp.643-651
    • /
    • 2021
  • Most people acquire information visually. The screens of computers, smartphones, and other devices constantly stimulate people's eyes, increasing fatigue. Against this social phenomenon, the realistic and rich sound of the 21st century's state-of-the-art sound systems can affect people's bodies and minds in various ways. Through sound, human beings are given space to calm and observe themselves. The purpose of this paper is to introduce immersive yoga training based on 3D sound, conducted jointly by ALgruppe & Rory's PranaLab, and to promote understanding of immersive audio systems. People who experienced immersive yoga not only enjoyed the effect of the sound but also received a powerful energy that gave them a sense of inner self-awareness. This responds to the multidisciplinary exchange required by the knowledge of modern society and, at the same time, signals the possibility of new cultural content.

Towards Group-based Adaptive Streaming for MPEG Immersive Video (MPEG Immersive Video를 위한 그룹 기반 적응적 스트리밍)

  • Jong-Beom Jeong;Soonbin Lee;Jaeyeol Choi;Gwangsoon Lee;Sangwoon Kwak;Won-Sik Cheong;Bongho Lee;Eun-Seok Ryu
    • Journal of Broadcast Engineering
    • /
    • v.28 no.2
    • /
    • pp.194-212
    • /
    • 2023
  • The MPEG immersive video (MIV) coding standard achieves high compression efficiency by removing inter-view redundancy and merging the residuals of immersive video, which consists of multiple texture (color) and geometry (depth) pairs. Grouping views that represent similar spaces enables quality improvement and the implementation of selective streaming, but this has not been actively discussed recently. This paper introduces an implementation of group-based encoding in the recent version of the MIV reference software, provides experimental results on the optimal views and videos per group, and proposes a method for deciding the optimal number of videos for a global immersive video representation using the proportion of residual videos.

Group-based Adaptive Rendering for 6DoF Immersive Video Streaming (6DoF 몰입형 비디오 스트리밍을 위한 그룹 분할 기반 적응적 렌더링 기법)

  • Lee, Soonbin;Jeong, Jong-Beom;Ryu, Eun-Seok
    • Journal of Broadcast Engineering
    • /
    • v.27 no.2
    • /
    • pp.216-227
    • /
    • 2022
  • The MPEG-I (Immersive) group is working on a standardization project for immersive video that provides six degrees of freedom (6DoF). The MPEG immersive video (MIV) standard is intended to provide limited 6DoF based on the depth image-based rendering (DIBR) technique. Many efficient coding methods have been suggested for MIV, but efficient transmission strategies have received little attention in MPEG-I. This paper proposes a group-based adaptive rendering method for immersive video streaming. Each group can be transmitted independently using group-based encoding, enabling adaptive transmission depending on the user's viewport. In the rendering process, the proposed method derives the weight of each group for view synthesis and allocates the high-quality bitstream according to the given viewport. The proposed method is implemented in the Test Model for Immersive Video (TMIV). In the experiments, it demonstrates 17.0% Bjontegaard-delta rate (BD-rate) savings on the peak signal-to-noise ratio (PSNR) and 14.6% on the immersive video PSNR (IV-PSNR) across various end-to-end evaluation metrics.
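
The viewport-dependent allocation step could be sketched like this: derive a weight per group from how close the group sits to the current viewport, and fetch the highest-weighted group at high quality. A 1-D inverse-distance weight is an illustrative assumption here; the paper derives its weights from the view-synthesis process, and the function names are hypothetical.

```python
def group_weights(viewport_pos, group_centers):
    """Normalized inverse-distance weights of each group for a viewport.

    viewport_pos: 1-D position of the user's viewport.
    group_centers: representative 1-D position of each view group.
    Weights sum to 1; closer groups contribute more to view synthesis.
    """
    inv = [1.0 / (abs(viewport_pos - c) + 1e-6) for c in group_centers]
    total = sum(inv)
    return [w / total for w in inv]

def pick_high_quality_group(viewport_pos, group_centers):
    """Index of the group that should receive the high-quality bitstream."""
    weights = group_weights(viewport_pos, group_centers)
    return max(range(len(weights)), key=weights.__getitem__)
```

Because each group is encoded and transmitted independently, the client can re-run this selection whenever the viewport moves and switch only the affected group's bitstream.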

Metaverse Realistic Media Digital Content Development Education Environment Improvement Research

  • Kyoung-A, Lee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.3
    • /
    • pp.67-73
    • /
    • 2023
  • Under the influence of COVID-19, with social distancing in place for about two years and one month, non-face-to-face services using ICT technologies expanded not only into the education sector but into all fields. In particular, as educational programs using metaverse platforms spread to various fields, educators and learners gained more learning experience with edutech, but problems with non-face-to-face learning, such as reduced immersion or concentration, were also raised. In this paper, to overcome these problems and to develop metaverse immersive media digital content that improves the educational environment, we use virtual reality (VR) based on an immersive metaverse to provide education and training content and to build the corresponding educational environment. We present a system that increases immersion and concentration in educational content in a virtual environment using a head-mounted display (HMD) for learners undergoing military education and training, and show that immersion was further improved.

A Study on Immersive Content Production and Storytelling Methods using Photogrammetry and Artificial Intelligence Technology (포토그래메트리 및 인공지능 기술을 활용한 실감 콘텐츠 제작과 스토리텔링 방법 연구)

  • Kim, Jungho;Park, JinWan;Yoo, Taekyung
    • Journal of Broadcast Engineering
    • /
    • v.27 no.5
    • /
    • pp.654-664
    • /
    • 2022
  • Immersive content overcomes spatial limitations through convergence with extended reality, artificial intelligence, and photogrammetry technology, and, together with the interest sparked by the COVID-19 pandemic, presents a new paradigm in content markets such as entertainment, media, performances, and exhibitions. However, for immersive content to sustain public interest, it is necessary to study storytelling methods that can increase immersion in the content, rather than relying on technological novelty. Therefore, in this study we propose an immersive content storytelling method using artificial intelligence and photogrammetry technology. The proposed method creates the content story through interaction between interactive virtual beings and participants; such participation can increase content immersion. This study is expected to help content creators in the accelerating immersive content market create content efficiently through a storytelling methodology built on AI-driven virtual beings. It should also contribute to establishing an immersive content production pipeline that uses artificial intelligence and photogrammetry technology.

Tiled Stereo Display System for Immersive Telemeeting

  • Kim, Ig-Jae;Ahn, Sang-Chul;Kim, Hyoung-Gon
    • Journal of Information Display
    • /
    • v.8 no.4
    • /
    • pp.27-31
    • /
    • 2007
  • In this paper, we present an efficient tiled stereo display system for tangible meetings. For a tangible meeting, it is important to provide an immersive, high-resolution display that covers the field of view and gives the local user the same environment as the remote site. To achieve this, a high-resolution image needs to be transmitted to reconstruct the remote world, and it should be shown on a tiled display. However, it is hard to transmit a high-resolution image in real time due to limited network bandwidth, so we receive multiple images and reconstruct the remote world from them in advance. We then update only the specific area where the remote user is present by receiving a low-resolution image in real time. We composite the transmitted image onto the existing environment map of the remote world and display it as a stereo image. For this, we developed a new system that supports GPU-based real-time warping and blending, with automatic feature extraction using machine vision techniques.
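
The blending the abstract mentions can be illustrated in one dimension: where two tiles overlap, cross-fade between them so the seam is invisible. This is a generic edge-blending sketch, not the paper's GPU implementation; the function name and linear ramp are assumptions.

```python
def blend_tiles(tile_a, tile_b, overlap):
    """Cross-fade two adjacent 1-D tiles across an `overlap`-sample seam.

    The trailing `overlap` samples of tile_a are linearly mixed with the
    leading `overlap` samples of tile_b (per pixel column, a tiled display
    does the same per channel on the GPU). Returns the merged strip of
    length len(tile_a) + len(tile_b) - overlap.
    """
    out = tile_a[:-overlap]
    for i in range(overlap):
        t = (i + 1) / (overlap + 1)  # ramp from tile_a toward tile_b
        out.append((1 - t) * tile_a[len(tile_a) - overlap + i] + t * tile_b[i])
    out += tile_b[overlap:]
    return out
```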