• Title/Summary/Keyword: immersive


MPEG-I Immersive Audio Standardization Trend (MPEG-I Immersive Audio 표준화 동향)

  • Kang, Kyeongok; Lee, Misuk; Lee, Yong Ju; Yoo, Jae-hyoun; Jang, Daeyoung; Lee, Taejin
    • Journal of Broadcast Engineering, v.25 no.5, pp.723-733, 2020
  • In this paper, MPEG-I Immersive Audio standardization and related trends are presented. MPEG-I Immersive Audio, whose standard documents are being developed at the exploration stage, lets a user interact with a virtual scene in a 6 DoF manner and perceive sounds that are realistic and match the user's spatial audio experience in the real world, in the VR/AR environments expected to be killer applications of hyper-connected environments such as 5G/6G. To this end, the MPEG Audio Working Group has discussed the system architecture and related requirements for the spatial audio experience in VR/AR, the audio evaluation platform (AEP) and encoder input format (EIF) for assessing the performance of submitted proponent technologies, and the evaluation procedures.
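The 6 DoF requirement above means a renderer must update each sound source's gain as the listener moves through the scene. A minimal illustrative sketch of that idea (not part of the MPEG-I specification; the inverse-distance model and all parameter names are assumptions) could look like:

```python
import math

def distance_gain(source_pos, listener_pos, ref_dist=1.0, min_gain=0.01):
    """Inverse-distance attenuation, a common baseline in 6 DoF audio
    rendering: full gain within ref_dist, then 1/d falloff, floored."""
    d = math.dist(source_pos, listener_pos)
    return max(min_gain, min(1.0, ref_dist / max(d, ref_dist)))

# As the listener moves away from the source, the rendered gain falls off.
near = distance_gain((0, 0, 0), (0.5, 0, 0))  # within ref distance -> 1.0
far = distance_gain((0, 0, 0), (4.0, 0, 0))   # 4 m away -> 0.25
```

A real MPEG-I renderer layers directivity, occlusion, and reverberation on top of such a distance model; this only shows the interaction between listener pose and rendering.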

Yoga of Consilience through Immersive Sound Experience (실감음향 체험을 통한 통섭의 요가)

  • Hyon, Jinoh
    • Journal of Broadcast Engineering, v.26 no.5, pp.643-651, 2021
  • Most people acquire information visually. The screens of computers, smartphones, etc. constantly stimulate people's eyes, increasing fatigue. Against this social phenomenon, the realistic and rich sound of the 21st century's state-of-the-art sound systems can affect people's bodies and minds in various ways. Through sound, human beings are given space to calm and observe themselves. The purpose of this paper is to introduce immersive yoga training based on 3D sound, conducted jointly by ALgruppe & Rory's PranaLab, and to promote the understanding of immersive audio systems. As a result, people who experienced immersive yoga not only enjoy the effect of the sound but also receive a powerful energy that gives them a sense of inner self-awareness. This responds to the multidisciplinary exchange demanded by the modern knowledge society and, at the same time, points to the possibility of new cultural content.

A Reference Frame Selection Method Using RGB Vector and Object Feature Information of Immersive 360° Media (실감형 360도 미디어의 RGB 벡터 및 객체 특징정보를 이용한 대표 프레임 선정 방법)

  • Park, Byeongchan; Yoo, Injae; Lee, Jaechung; Jang, Seyoung; Kim, Seok-Yoon; Kim, Youngmo
    • Journal of IKEEE, v.24 no.4, pp.1050-1057, 2020
  • Immersive 360-degree media suffers from slow video recognition when processed with conventional methods, because its various rendering methods, higher quality, and extra-large volume make the video far larger than existing video. In addition, because of the characteristics of immersive 360-degree media, in most cases only one scene is captured with the camera fixed in a specific place, so it is unnecessary to extract feature information from every scene. In this paper, we propose a reference frame selection method for immersive 360-degree media and describe how it is applied to copyright protection technology. The proposed method performs three pre-processing steps: frame extraction from the immersive 360-degree media, frame downsizing, and spherical rendering. In the rendering step, the video is divided into 16 frames and captured. In the central part, where most object information resides, objects are extracted using per-pixel RGB vectors and deep learning, and a reference frame is selected using the object feature information.
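As a rough illustration of RGB-vector-based selection, one could pick the frame whose mean RGB vector lies closest to the average over all candidate frames. The sketch below is an assumption about the flavor of such a method, not the paper's actual algorithm (which also uses deep-learning object features):

```python
def mean_rgb(frame):
    """Mean RGB vector of a frame given as rows of (r, g, b) pixels."""
    pixels = [px for row in frame for px in row]
    n = len(pixels)
    return tuple(sum(px[c] for px in pixels) / n for c in range(3))

def select_reference_frame(frames):
    """Pick the frame whose mean RGB vector is closest to the centroid of
    all frames' mean vectors -- a stand-in for RGB-vector-based selection."""
    means = [mean_rgb(f) for f in frames]
    centroid = tuple(sum(m[c] for m in means) / len(means) for c in range(3))
    def dist(m):
        return sum((m[c] - centroid[c]) ** 2 for c in range(3)) ** 0.5
    return min(range(len(means)), key=lambda i: dist(means[i]))

# Three synthetic 2x2 frames: dark, mid, bright grey.
frames = [[[(v, v, v)] * 2] * 2 for v in (10, 120, 250)]
idx = select_reference_frame(frames)  # the mid-brightness frame is most typical
```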

Effects of Immersive Virtual Reality English Conversations on Language Anxiety and Learning Achievement (몰입형 가상현실 영어 회화 학습이 언어불안감과 학습 성취도에 미치는 영향)

  • Jeong, Ji-Yeon; Jeong, Heisawn
    • The Journal of the Korea Contents Association, v.21 no.1, pp.321-332, 2021
  • This study developed an English conversation learning program using virtual reality (VR) and mobile devices. Participants learned and practiced English conversational patterns under immersive virtual reality and mobile conditions. In the program, participants learned and practiced nine conversational patterns with virtual characters in four steps. Language anxiety and conversational fluency were measured to examine the effects of the program. A language anxiety questionnaire was administered before and after the experiment. The results showed that language anxiety was significantly reduced after learning in both conditions, and the reduction was significantly greater in the immersive condition. Conversational fluency was assessed based on the changes in the length, appropriateness, and accuracy of the responses before and after participants learned and practiced conversational episodes. The results showed that the length, appropriateness, and accuracy of the responses improved in both conditions after learning. The response length was significantly longer in the immersive VR condition. These results suggest that immersive VR can be an effective tool for enhancing English conversational abilities.

A Study on the Exploration of English Learning Design Elements Applying Immersive Virtual Reality (몰입형 가상현실을 적용한 영어학습 설계요소 탐색에 관한 연구)

  • Choi, Dong-Yeon
    • Journal of the Korea Convergence Society, v.13 no.5, pp.209-217, 2022
  • Virtual reality entered a new phase with the introduction of wearable devices, represented by Oculus. This study proposes Oculus Rift-based immersive virtual reality for English learning that enables direct, first-person immersion in the virtual environment. It is meaningful here to provide considerations for designing language-learning instruction through a comprehensive and integrated review of immersive virtual reality. The main purpose of this study is therefore to find ways to maximize the application of immersive virtual worlds to language education. As design elements for English learning through immersive virtual reality, the study suggests setting the learning theory, considering individual differences among learners, selecting learning tasks, regulating the teacher's influence, determining the types of senses engaged, and ensuring flexibility of the design and usage environment. Integrating these results, various discussions and directions for instructional design are presented.

Towards Group-based Adaptive Streaming for MPEG Immersive Video (MPEG Immersive Video를 위한 그룹 기반 적응적 스트리밍)

  • Jong-Beom Jeong; Soonbin Lee; Jaeyeol Choi; Gwangsoon Lee; Sangwoon Kwak; Won-Sik Cheong; Bongho Lee; Eun-Seok Ryu
    • Journal of Broadcast Engineering, v.28 no.2, pp.194-212, 2023
  • The MPEG immersive video (MIV) coding standard achieves high compression efficiency by removing inter-view redundancy and merging the residuals of immersive video, which consists of multiple texture (color) and geometry (depth) pairs. Grouping views that represent similar spaces enables quality improvement and selective streaming, but this has not been actively discussed recently. This paper introduces an implementation of group-based encoding in the recent version of the MIV reference software, provides experimental results on the optimal views and videos per group, and proposes a method for deciding the optimal number of videos for a global immersive video representation based on the proportion of residual videos.
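Group-based encoding starts by partitioning the source views into spatially coherent groups. A simplified sketch of that step (splitting cameras along one axis into equal-sized groups; the actual MIV reference-software grouping is more involved, so this is an assumption for illustration) might be:

```python
def group_views(camera_positions, num_groups):
    """Assign each source view (indexed by position in the input list) to one
    of num_groups spatial groups by sorting cameras along the x axis and
    splitting the sorted order evenly."""
    order = sorted(range(len(camera_positions)),
                   key=lambda i: camera_positions[i][0])
    size = -(-len(order) // num_groups)  # ceiling division
    return [order[i:i + size] for i in range(0, len(order), size)]

# Four cameras forming two spatial clusters along x.
cams = [(0.0, 0, 0), (0.2, 0, 0), (5.0, 0, 0), (5.3, 0, 0)]
groups = group_views(cams, 2)  # two spatially coherent groups of views
```

With such groups in place, a client can stream only the group covering the viewer's current position, which is the selective-streaming benefit the abstract mentions.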

Factors Affecting Individual Effectiveness in Metaverse Workplaces and Moderating Effect of Metaverse Platforms: A Modified ESP Theory Perspective (메타버스 작업공간의 개인적 효과에 영향 및 메타버스 플랫폼의 조절효과에 대한 연구: 수정된 ESP 이론 관점으로)

  • Jooyeon Jeong; Ohbyung Kwon
    • Journal of Intelligence and Information Systems, v.29 no.4, pp.207-228, 2023
  • After COVID-19, organizations have widely adopted platforms such as Zoom or developed proprietary online real-time systems for remote work, with recent forays into using the metaverse for meetings and publicity. While ongoing studies investigate the impact of avatar customization, expansive virtual environments, and past virtual experiences on participant satisfaction within virtual reality or metaverse settings, the use of the metaverse as a dedicated workspace is still an evolving area. There is a notable gap in research on the factors influencing the performance of the metaverse as a workspace, particularly in non-immersive work-type metaverses. Unlike studies focusing on immersive virtual reality or on metaverses emphasizing immersion and presence, the majority of contemporary work-oriented metaverses are non-immersive, so understanding the factors that contribute to their success is crucial. Hence, this paper empirically analyzes the factors affecting personal outcomes in the non-immersive metaverse workspace and derives implications from the results. To this end, the study adopts the Embodied Social Presence (ESP) model as a theoretical foundation and proposes a modified research model tailored to the non-immersive metaverse workspace. Following interviews with participants in non-immersive metaverse workplaces (specifically Gather Town and Ifland), a survey was conducted to gather comprehensive insights. The findings validate that the impact of presence on task engagement and task involvement is moderated by the metaverse platform used.

A Feature Point Extraction and Identification Technique for Immersive Contents Using Deep Learning (딥 러닝을 이용한 실감형 콘텐츠 특징점 추출 및 식별 방법)

  • Park, Byeongchan; Jang, Seyoung; Yoo, Injae; Lee, Jaechung; Kim, Seok-Yoon; Kim, Youngmo
    • Journal of IKEEE, v.24 no.2, pp.529-535, 2020
  • As a main technology of the 4th industrial revolution, immersive 360-degree video content is drawing attention. The worldwide market for immersive 360-degree video content is projected to grow from $6.7 billion in 2018 to approximately $70 billion in 2020. However, most immersive 360-degree video content is distributed through illegal distribution networks such as Webhard and Torrent, and the damage caused by illegal reproduction is increasing. The existing 2D video industry uses copyright-filtering technology to prevent such illegal distribution. The technical difficulties in dealing with immersive 360-degree videos arise because they require ultra-high-quality pictures and contain images captured by two or more cameras merged into a single image, which creates distortion regions. There are also technical limitations such as the increase in feature point data due to the ultra-high definition and the required processing speed. These considerations make it difficult to apply the same 2D filtering technology to 360-degree videos. To solve this problem, this paper proposes a feature point extraction and identification technique that selects object identification areas excluding regions with severe distortion, recognizes objects in those areas using deep learning, and extracts feature points from the identified object information. Compared with the previously proposed method of extracting feature points from the stitching area of immersive content, the proposed technique shows an excellent performance gain.
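Excluding severely distorted regions can be as simple as keeping only the central latitude band of an equirectangular frame before running detection, since stitching distortion concentrates near the poles. The band fraction below is a hypothetical parameter for illustration, not a value from the paper:

```python
def identification_band(height, pole_fraction=0.25):
    """Row range of an equirectangular frame that excludes the heavily
    distorted polar regions; object detection and feature extraction are
    run only inside [top, bottom). pole_fraction is the share of rows
    discarded at each pole (assumed value, not from the paper)."""
    top = int(height * pole_fraction)
    bottom = height - top
    return top, bottom

top, bottom = identification_band(1000)  # keep rows 250..750 of a 1000-row frame
```

Everything outside this band would simply be skipped when extracting feature points, which also cuts the feature-data volume the abstract identifies as a limitation.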

A study on the Interaction of Immersive Contents Focusing on the National Museum of Korea Immersive Digital Gallery and Arte Museum Jeju (실감콘텐츠의 인터랙션 연구 -국립중앙박물관 디지털실감영상관과 아르떼뮤지엄제주를 중심으로-)

  • Ahn, Hyeryung; Kim, Kenneth Chi Ho
    • Journal of Digital Convergence, v.20 no.4, pp.575-584, 2022
  • The purpose of this study is to derive the interaction types that appear in immersive contents through John Dewey's theory of experience, and to explore how the derived types are conveyed in real cases. To this end, two cases, the National Museum of Korea's 'Immersive Digital Gallery' and 'Arte Museum Jeju', were analyzed through the interaction types derived from the theory. The derived interaction types are 'multi-sensory', 'simultaneous experience', and 'sensory expansion'. In both cases, these types appear connected rather than applied one by one: in one direction, 'multi-sensory' leads to 'sensory expansion', and in another, 'simultaneous experience' leads to 'sensory expansion'. Thus, the core types of communication between technology and humans are not delivered individually; rather, a cycle of interaction is formed in multiple ways. The interaction types of immersive contents therefore expand step by step through the fusion of various senses and experiences across fields, rather than through a 1:1 partial delivery. Building on this, future work should study how these types expand and how viewers are affected when interaction is implemented in immersive contents.

Visual Object Tracking Fusing CNN and Color Histogram based Tracker and Depth Estimation for Automatic Immersive Audio Mixing

  • Park, Sung-Jun; Islam, Md. Mahbubul; Baek, Joong-Hwan
    • KSII Transactions on Internet and Information Systems (TIIS), v.14 no.3, pp.1121-1141, 2020
  • We propose a robust visual object tracking algorithm that fuses a convolutional neural network tracker, trained offline on a large number of video repositories, with a color histogram based tracker to track objects for immersive audio mixing. Our algorithm addresses the occlusion and large-movement problems of the CNN-based GOTURN generic object tracker. The key idea is to train, offline, a binary classifier on the color histogram similarity values estimated by both trackers, use it to select the appropriate tracker for the target, and update both trackers with the predicted bounding box of the target to continue tracking. Furthermore, a histogram similarity constraint is applied before updating the trackers to maximize tracking accuracy. Finally, we compute the depth (z) of the target object with one of the prominent unsupervised monocular depth estimation algorithms to obtain the 3D position needed to mix the immersive audio onto that object. Our proposed algorithm demonstrates about 2% higher accuracy than the GOTURN algorithm on the VOT2014 tracking benchmark. Our tracker can also track multiple objects by applying the single-object tracker concept to each target, although no results are demonstrated on any MOT benchmark.
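The tracker-selection step hinges on comparing color-histogram similarities between each tracker's predicted box and the target's color model. The paper trains a binary classifier for this decision; the sketch below substitutes a simple threshold rule over Bhattacharyya coefficients, so it is an illustrative assumption rather than the authors' method:

```python
import math

def bhattacharyya(h1, h2):
    """Bhattacharyya coefficient between two normalized histograms
    (1.0 = identical distributions, 0.0 = disjoint)."""
    return sum(math.sqrt(a * b) for a, b in zip(h1, h2))

def choose_tracker(sim_cnn, sim_hist, threshold=0.6):
    """Pick the tracker whose predicted box better matches the target's
    color model; when both similarities are low (likely occlusion), fall
    back to the learned CNN tracker. Threshold is an assumed value."""
    if max(sim_cnn, sim_hist) < threshold:
        return "cnn"
    return "cnn" if sim_cnn >= sim_hist else "hist"

identical = bhattacharyya([0.5, 0.5], [0.5, 0.5])  # -> 1.0
disjoint = bhattacharyya([1.0, 0.0], [0.0, 1.0])   # -> 0.0
```

In the actual method, the chosen prediction also updates both trackers (subject to the histogram similarity constraint) so that they stay synchronized on the target.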