• Title/Summary/Keyword: Immersive User Interface

Brain Correlates of Emotion for XR Auditory Content (XR 음향 콘텐츠 활용을 위한 감성-뇌연결성 분석 연구)

  • Park, Sangin;Kim, Jonghwa;Park, Soon Yong;Mun, Sungchul
    • Journal of Broadcast Engineering, v.27 no.5, pp.738-750, 2022
  • In this study, we reviewed and discussed whether short auditory stimuli can evoke emotion-related neurological responses. The findings suggest that if personalized soundtracks are provided to XR users based on machine learning or probabilistic network models, user experiences in XR environments can be enhanced. We also found that the arousal-relaxation factor evoked by short auditory stimuli produces distinct patterns of functional connectivity characterized from background EEG signals: coherence in the right hemisphere increases in the sound-evoked arousal state and decreases in the relaxed state. Our findings can be practically utilized in developing an XR sound biofeedback system that provides preferred sounds to users for highly immersive XR experiences.
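
A minimal sketch of the kind of EEG coherence computation this abstract refers to, using SciPy's magnitude-squared coherence on two synthetic channels; the channel names (F4, P4), sampling rate, and alpha band are illustrative assumptions, not details taken from the paper:

```python
# Sketch of inter-channel EEG coherence as a connectivity measure.
# Channel names, sampling rate, and frequency band are assumptions.
import numpy as np
from scipy.signal import coherence

fs = 256  # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Stand-ins for two right-hemisphere electrodes (e.g., F4 and P4)
# sharing a common 10 Hz (alpha) component plus independent noise.
shared = np.sin(2 * np.pi * 10 * t)
ch_f4 = shared + 0.5 * rng.standard_normal(t.size)
ch_p4 = shared + 0.5 * rng.standard_normal(t.size)

# Magnitude-squared coherence between the two channels.
freqs, coh = coherence(ch_f4, ch_p4, fs=fs, nperseg=512)

# Average coherence in the alpha band (8-13 Hz) as a connectivity score;
# the paper's finding would correspond to this score rising with arousal.
alpha = (freqs >= 8) & (freqs <= 13)
print(f"alpha-band coherence: {coh[alpha].mean():.3f}")
```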

Synthetic Data Generation with Unity 3D and Unreal Engine for Construction Hazard Scenarios: A Comparative Analysis

  • Aqsa Sabir;Rahat Hussain;Akeem Pedro;Mehrtash Soltani;Dongmin Lee;Chansik Park;Jae-Ho Pyeon
    • International conference on construction engineering and project management, 2024.07a, pp.1286-1288, 2024
  • The construction industry, known for its inherent risks and multiple hazards, necessitates effective solutions for hazard identification and mitigation [1]. To address this need, the implementation of machine learning models specializing in object detection has become increasingly important, because this technological approach plays a crucial role in augmenting worker safety by proactively recognizing potential dangers on construction sites [2], [3]. However, the challenge in training these models lies in obtaining accurately labeled datasets, as conventional methods require labor-intensive labeling or costly measurements [4]. To circumvent these challenges, synthetic data generation (SDG) has emerged as a key method for creating realistic and diverse training scenarios [5], [6]. The paper reviews the evolution of synthetic data generation tools, highlighting the shift from earlier solutions like Synthpop and Data Synthesizer to advanced game engines [7]. Among the various gaming platforms, Unity 3D and Unreal Engine stand out due to their advanced capabilities in replicating realistic construction hazard environments [8], [9]. Comparing Unity 3D and Unreal Engine is crucial for evaluating their effectiveness in SDG, aiding developers in selecting the appropriate platform for their needs. For this purpose, this paper conducts a comparative analysis of both engines, assessing their ability to create high-fidelity interactive environments. To thoroughly evaluate the suitability of these engines for generating synthetic data in construction site simulations, the analysis focuses on graphical realism, developer-friendliness, and user interaction capabilities; these key aspects are essential for replicating realistic construction sites, ensuring both high visual fidelity and ease of use for developers. Firstly, graphical realism is crucial for training ML models to recognize the nuanced nature of construction environments. In this aspect, Unreal Engine stands out with its superior graphics quality compared to Unity 3D, which is typically considered to have less graphical prowess [10]. Secondly, developer-friendliness is vital for those generating synthetic data. Research indicates that Unity 3D is praised for its user-friendly interface and its use of C# scripting, which is widely used in educational settings, making it a popular choice for those new to game development or synthetic data generation. Unreal Engine, by contrast, while offering powerful capabilities in terms of realistic graphics, is often viewed as more complex due to its use of C++ scripting and the Blueprint system. While the Blueprint system is a visual scripting tool that does not require traditional coding, it can be intricate and may present a steeper learning curve, especially for those without prior experience in game development [11]. Lastly, regarding user interaction capabilities, Unity 3D is known for its intuitive interface and versatility, particularly in VR/AR development for various skill levels. In contrast, Unreal Engine, with its advanced graphics and Blueprint scripting, is better suited for creating high-end, immersive experiences [12]. Based on current insights, this comparative analysis underscores the user-friendly interface and adaptability of Unity 3D, featuring a built-in Perception package that facilitates automatic labeling for SDG [13]; this functionality enhances accessibility and simplifies the SDG process for users. Conversely, Unreal Engine is distinguished by its advanced graphics and realistic rendering capabilities, offering plugins such as EasySynth (which does not provide automatic labeling) and NDDS for SDG [14], [15]. The development complexity associated with Unreal Engine presents challenges for novice users, whereas the more approachable platform of Unity 3D is advantageous for beginners. This research provides an in-depth review of the latest advancements in SDG, shedding light on potential future research and development directions. The study concludes that the integration of such game engines in ML model training markedly enhances hazard recognition and decision-making skills among construction professionals, thereby significantly advancing data acquisition for machine learning in construction safety monitoring.
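
An engine-agnostic sketch of the "automatic labeling" idea the abstract attributes to synthetic-data pipelines: because the renderer already knows each object's identity and 2D bounding box, labels can be emitted alongside every frame. The record types, category ids, and file names below are hypothetical illustrations, not part of Unity's or Unreal's actual APIs:

```python
# Sketch: emit COCO-style labels for synthetic frames whose object
# positions are known to the generator. All names here are hypothetical.
import json
from dataclasses import dataclass

@dataclass
class DetectedObject:
    category_id: int   # e.g., 1 = worker, 2 = excavator (assumed ids)
    bbox: tuple        # (x, y, width, height) in pixels

def coco_annotations(frames: dict) -> dict:
    """Convert per-frame object records into a COCO-style label dict."""
    images, annotations = [], []
    ann_id = 0
    for img_id, (name, objects) in enumerate(frames.items()):
        images.append({"id": img_id, "file_name": name})
        for obj in objects:
            annotations.append({
                "id": ann_id, "image_id": img_id,
                "category_id": obj.category_id, "bbox": list(obj.bbox),
            })
            ann_id += 1
    return {"images": images, "annotations": annotations}

# Example: one synthetic frame with a worker near an excavator.
labels = coco_annotations({
    "frame_0001.png": [DetectedObject(1, (120, 80, 40, 90)),
                       DetectedObject(2, (300, 60, 180, 150))],
})
print(json.dumps(labels, indent=2))
```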

Augmented Reality Game Interface Using Hand Gestures Tracking (사용자 손동작 추적에 기반한 증강현실 게임 인터페이스)

  • Yoon, Jong-Hyun;Park, Jong-Seung
    • Journal of Korea Game Society, v.6 no.2, pp.3-12, 2006
  • Recently, many 3D augmented reality games that provide a strengthened sense of immersion have appeared. In this article, we describe a barehanded interaction method based on human hand gestures for augmented reality games. First, feature points are extracted from input video streams. Point features are tracked, and the motion of moving objects is computed. The shape of the motion trajectories is used to determine whether the motion is an intended gesture. A long smooth trajectory toward one of the virtual objects or menus is classified as an intended gesture, and the corresponding action is invoked. To prove the validity of the proposed method, we implemented two simple augmented reality applications: a gesture-based music player and a virtual basketball game. In the music player, several menu icons are displayed at the top of the screen, and a user can activate a menu by hand gestures. In the virtual basketball game, a virtual ball bounces in a virtual cube space while the real video stream is shown in the background. A user can hit the virtual ball with hand gestures. Experiments with three untrained users show that the accuracy of menu activation according to the intended gestures is 94% for normal-speed gestures and 84% for fast and abrupt gestures.
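
A minimal sketch of the trajectory test this abstract describes: track feature points with optical flow and treat a long, smooth trajectory heading toward a target (a menu icon or virtual object) as an intended gesture. The length, straightness, and heading thresholds are illustrative assumptions, not the paper's actual parameters:

```python
# Sketch of trajectory-based gesture classification over tracked features.
# Thresholds and the target point are assumptions for illustration.
import numpy as np
import cv2

def is_intended_gesture(trajectory, target,
                        min_len=80.0, min_straightness=0.8):
    """trajectory: list of (x, y) positions of one tracked feature point."""
    pts = np.asarray(trajectory, dtype=np.float32)
    if len(pts) < 2:
        return False
    path_len = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    if path_len < min_len:                       # too short to be deliberate
        return False
    # Smoothness: straight-line distance over path length (1.0 = straight).
    straightness = np.linalg.norm(pts[-1] - pts[0]) / path_len
    # Heading: does the overall motion point at the target?
    heading = (pts[-1] - pts[0]) / max(np.linalg.norm(pts[-1] - pts[0]), 1e-6)
    to_target = np.asarray(target, np.float32) - pts[0]
    to_target /= max(np.linalg.norm(to_target), 1e-6)
    return straightness >= min_straightness and float(heading @ to_target) > 0.9

def track_points(prev_gray, gray, points):
    """One Lucas-Kanade optical-flow step; points is an (N, 1, 2) float32 array."""
    nxt, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, points, None)
    return nxt[status.ravel() == 1]
```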

Real-Time Stereoscopic Visualization of Very Large Volume Data on CAVE (CAVE상에서의 방대한 볼륨 데이타의 실시간 입체 영상 가시화)

  • 임무진;이중연;조민수;이상산;임인성
    • Journal of KIISE: Computing Practices and Letters, v.8 no.6, pp.679-691, 2002
  • Volume visualization is an important subarea of scientific visualization, concerned with techniques for generating meaningful visual information from abstract and complex volume datasets defined in three- or higher-dimensional space. It has become increasingly important in various fields, including meteorology, medical science, and computational fluid dynamics. Virtual reality, on the other hand, is a research field focusing on techniques that help users experience virtual worlds through visual, auditory, and tactile senses. In this paper, we have developed a visualization system for CAVE, an immersive 3D virtual environment system, which generates stereoscopic images from huge human volume datasets in real time using an improved volume visualization technique. To complement 3D texture-mapping-based volume rendering methods, which easily slow down as data sizes increase, our system utilizes an image-based rendering technique to guarantee real-time performance. The system has been designed to offer a variety of user interface functions for effective visualization. In this article, we present a detailed description of our real-time stereoscopic visualization system and show how the Visible Korean Human dataset is effectively visualized on CAVE.
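
A minimal sketch of the slice-compositing step that underlies the 3D-texture-mapping volume rendering the paper sets out to accelerate. Real systems composite view-aligned slices on the GPU and, for CAVE, render once per eye; the axis-aligned slicing, transfer function, and volume below are illustrative stand-ins:

```python
# Sketch: back-to-front compositing of volume slices (the "over" operator),
# the core of texture-slicing volume rendering. Data and transfer function
# are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
volume = rng.random((64, 64, 64)).astype(np.float32)  # stand-in density data

def transfer_function(density):
    """Map density to (color, opacity); a simple grayscale ramp here."""
    return density, density * 0.05  # per-voxel color and alpha

def composite_back_to_front(vol):
    """Blend axis-aligned slices from back to front into one image."""
    image = np.zeros(vol.shape[1:], dtype=np.float32)
    for z in reversed(range(vol.shape[0])):       # farthest slice first
        color, alpha = transfer_function(vol[z])
        image = color * alpha + image * (1.0 - alpha)
    return image

# A stereoscopic system would run this (or its image-based approximation)
# twice per frame with left- and right-eye view transforms; this sketch
# produces a single monoscopic image.
frame = composite_back_to_front(volume)
print(frame.shape, float(frame.min()), float(frame.max()))
```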