• Title/Abstract/Keywords: immersive learning

108 search results

Brain Correlates of Emotion for XR Auditory Content (XR 음향 콘텐츠 활용을 위한 감성-뇌연결성 분석 연구)

  • Park, Sangin;Kim, Jonghwa;Park, Soon Yong;Mun, Sungchul
    • Journal of Broadcast Engineering / v.27 no.5 / pp.738-750 / 2022
  • In this study, we reviewed and discussed whether short auditory stimuli can evoke emotion-related neurological responses. The findings imply that if personalized soundtracks are provided to XR users based on machine learning or probabilistic network models, user experiences in XR environments can be enhanced. We also found that the arousal-relaxation factor evoked by short auditory stimuli produces distinct patterns of functional connectivity characterized from background EEG signals: coherence in the right hemisphere increases in the sound-evoked arousal state, with the opposite pattern in the relaxed state. These findings can be practically applied in developing XR sound biofeedback systems that provide preferred sounds to users for highly immersive XR experiences.
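
The hemispheric coherence measure referenced above can be sketched as follows. The paper's exact estimator, electrode pairs, and frequency bands are not specified here, so this is a minimal illustration of Welch-averaged magnitude-squared coherence between two EEG channels, with all parameters (sampling rate, segment length) chosen for the example only.

```python
import numpy as np

def msc(x, y, fs=256, nperseg=256):
    """Magnitude-squared coherence between two signals via Welch averaging.

    Splits both signals into 50%-overlapping Hann-windowed segments,
    accumulates auto- and cross-spectra, and returns coherence per frequency.
    """
    step = nperseg // 2
    win = np.hanning(nperseg)
    Sxx = Syy = Sxy = 0.0
    for start in range(0, len(x) - nperseg + 1, step):
        fx = np.fft.rfft(win * x[start:start + nperseg])
        fy = np.fft.rfft(win * y[start:start + nperseg])
        Sxx = Sxx + fx * np.conj(fx)   # auto-spectrum of x
        Syy = Syy + fy * np.conj(fy)   # auto-spectrum of y
        Sxy = Sxy + fx * np.conj(fy)   # cross-spectrum
    coh = np.abs(Sxy) ** 2 / (Sxx.real * Syy.real)
    freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
    return freqs, coh
```

Two channels sharing a common oscillation (e.g. a 10 Hz rhythm) yield coherence near 1 at that frequency, while independent noise stays near the 1/K averaging floor; the paper's analysis compares such values between hemispheres across arousal states.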

Factors Influencing the Intention to Participate in Digital Cultural Tourism on the Metaverse Platform (메타버스 플랫폼에서의 문화관광 활동 참여 의도에 영향을 미치는 요인에 관한 연구)

  • Jiaping Zang;Eunjin Kim
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.341-359 / 2023
  • The metaverse applies various technological means such as digital twin modeling, 3D rendering, and holographic imaging, which can provide an immersive tourism service experience. However, since the development of the metaverse is still in its infancy, there is relatively little research on digital tourism from the metaverse perspective. This research empirically studies the factors that promote users' participation in digital cultural tourism on metaverse platforms. Our results show that users' internal motivations of learning and entertainment, together with the functions provided by the metaverse (sensory stimulation and social interaction), lead to the intention to participate in cultural tourism on the metaverse, mediated by immersion experience and perceived pleasure.

Interaction Analysis Between Visitors and Gesture-based Exhibits in Science Centers from Embodied Cognition Perspectives (체화된 인지의 관점에서 과학관 제스처 기반 전시물의 관람객 상호작용 분석)

  • So, Hyo-Jeong;Lee, Ji Hyang;Oh, Seung Ja
    • Korea Science and Art Forum / v.25 / pp.227-240 / 2016
  • This study examines how visitors in science centers interact with gesture-based exhibits from embodied cognition perspectives. Four gesture-based exhibits in two science centers were selected for this study. In addition, we interviewed a total of 14 visitor groups to examine how they perceived the properties of the gesture-based exhibits, and four experts to further examine the benefits and limitations of current gesture-based exhibits in science centers. The results indicate that the total interaction time between visitors and gesture-based exhibits was low overall, implying little immersive engagement by visitors. Both experts and visitors noted that current gesture-based exhibits tend to highlight the novelty effect but show little obvious impact in linking gestures to learning. Drawing from these findings, this study suggests the following design considerations for gesture-based exhibits. First, to increase visitors' initial engagement, the purpose and usability of gesture-based exhibits should be considered from the initial design phase. Second, to promote meaningful interaction, it is important to sustain visitors' initial engagement; for that, gesture-based exhibits should move beyond simple interaction to promote intellectual curiosity. Third, from embodied cognition perspectives, exhibit design should reflect how the mappings between specific gestures and metaphors affect learning processes. Lastly, this study suggests that future gesture-based exhibits should be designed to promote interaction among visitors and adaptive inquiry.

Synthetic Data Generation with Unity 3D and Unreal Engine for Construction Hazard Scenarios: A Comparative Analysis

  • Aqsa Sabir;Rahat Hussain;Akeem Pedro;Mehrtash Soltani;Dongmin Lee;Chansik Park;Jae-Ho Pyeon
    • International conference on construction engineering and project management / 2024.07a / pp.1286-1288 / 2024
  • The construction industry, known for its inherent risks and multiple hazards, necessitates effective solutions for hazard identification and mitigation [1]. To address this need, the implementation of machine learning models specializing in object detection has become increasingly important, because this technological approach plays a crucial role in augmenting worker safety by proactively recognizing potential dangers on construction sites [2], [3]. However, the challenge in training these models lies in obtaining accurately labeled datasets, as conventional methods require labor-intensive labeling or costly measurements [4]. To circumvent these challenges, synthetic data generation (SDG) has emerged as a key method for creating realistic and diverse training scenarios [5], [6]. The paper reviews the evolution of synthetic data generation tools, highlighting the shift from earlier solutions like Synthpop and Data Synthesizer to advanced game engines [7]. Among the various gaming platforms, Unity 3D and Unreal Engine stand out due to their advanced capabilities in replicating realistic construction hazard environments [8], [9]. Comparing Unity 3D and Unreal Engine is crucial for evaluating their effectiveness in SDG, aiding developers in selecting the appropriate platform for their needs. For this purpose, this paper conducts a comparative analysis of both engines, assessing their ability to create high-fidelity interactive environments. To thoroughly evaluate the suitability of these engines for generating synthetic data in construction site simulations, the focus is on graphical realism, developer-friendliness, and user interaction capabilities, as these aspects are essential for replicating realistic construction sites while ensuring both high visual fidelity and ease of use for developers. Firstly, graphical realism is crucial for training ML models to recognize the nuanced nature of construction environments.
In this aspect, Unreal Engine stands out with its superior graphics quality, whereas Unity 3D is typically considered to have less graphical prowess [10]. Secondly, developer-friendliness is vital for those generating synthetic data. Research indicates that Unity 3D is praised for its user-friendly interface and its use of C# scripting, which is widely used in educational settings, making it a popular choice for those new to game development or synthetic data generation. Unreal Engine, by contrast, while offering powerful capabilities in terms of realistic graphics, is often viewed as more complex due to its use of C++ scripting and the Blueprint system. While the Blueprint system is a visual scripting tool that does not require traditional coding, it can be intricate and may present a steeper learning curve, especially for those without prior experience in game development [11]. Lastly, regarding user interaction capabilities, Unity 3D is known for its intuitive interface and versatility, particularly in VR/AR development across skill levels, whereas Unreal Engine, with its advanced graphics and Blueprint scripting, is better suited for creating high-end, immersive experiences [12]. Based on current insights, this comparative analysis underscores the user-friendly interface and adaptability of Unity 3D, featuring a built-in Perception package that facilitates automatic labeling for SDG [13]. This functionality enhances accessibility and simplifies the SDG process for users. Conversely, Unreal Engine is distinguished by its advanced graphics and realistic rendering capabilities; it offers plugins such as EasySynth (which does not provide automatic labeling) and NDDS for SDG [14], [15]. The development complexity associated with Unreal Engine presents challenges for novice users, whereas the more approachable platform of Unity 3D is advantageous for beginners.
This research provides an in-depth review of the latest advancements in SDG, shedding light on potential future research and development directions. The study concludes that the integration of such game engines in ML model training markedly enhances hazard recognition and decision-making skills among construction professionals, thereby significantly advancing data acquisition for machine learning in construction safety monitoring.
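
The abstract does not reproduce either engine's labeling pipeline, but the principle that makes engine-based SDG attractive can be sketched: because a game engine knows every object's pose and the camera parameters exactly, 2D bounding-box labels can be computed rather than hand-annotated. The function below is an illustrative pinhole-projection sketch in plain numpy; the camera intrinsics, image size, and function names are assumptions for the example, not taken from the paper.

```python
import numpy as np

def project_bbox(corners_world, K, cam_pose, img_w=1280, img_h=720):
    """Project an object's 3D corner points into the image and return its
    2D bounding box (x, y, w, h), as an engine-side auto-labeler would."""
    R, t = cam_pose                      # world -> camera rotation, translation
    pts_cam = (R @ corners_world.T).T + t
    pts_cam = pts_cam[pts_cam[:, 2] > 1e-6]   # keep points in front of camera
    if len(pts_cam) == 0:
        return None
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]          # perspective divide
    x0, y0 = np.clip(uv.min(axis=0), [0, 0], [img_w, img_h])
    x1, y1 = np.clip(uv.max(axis=0), [0, 0], [img_w, img_h])
    if x1 <= x0 or y1 <= y0:             # object entirely off-screen
        return None
    return float(x0), float(y0), float(x1 - x0), float(y1 - y0)
```

Tools like Unity's Perception package or Unreal's NDDS plugin automate exactly this kind of projection (plus occlusion handling and export formats) for every labeled object in every rendered frame.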

A case study on the importance of non-intrusiveness of mobile devices in an interactive museum environment (인터랙티브 전시환경에서 모바일 디바이스의 비간섭적 특성의 중요성에 대한 사례 연구)

  • Rhee, Boa
    • Journal of the Korea Society of Computer and Information / v.18 no.1 / pp.31-42 / 2013
  • This research sheds light, via case studies, on the non-intrusive traits of mobile devices (Electronic Guidebook, Rememberer, I-Guides, and eXspot) deployed in the Exploratorium for enhancing visitor experience. In an interactive exhibition environment, non-intrusiveness was the key to supporting immersive experience and meaning-making for visitors. The usability of hand-held devices directly impacted non-intrusiveness, thereby reshaping the form factors of the mobile devices. The change in form factor also minimized the devices' function as a rememberer of the museum experience. Furthermore, the role of the mobile devices, which turned from an intended multimedia guide into a mere rememberer, made it virtually impossible to realize the "seamless visiting model" originally planned. The array of projects carried out in the Exploratorium achieved some degree of success, such as increasing viewing time and reinforcing post-visit activities. However, from a museological perspective, an increase in viewing time is insufficient as proof by itself, since it is assumed to be achieved by photo-taking (i.e., MyExploratorium) rather than by interaction between visitors and exhibits; this issue of increased viewing time needs to be analyzed in depth. All in all, the mobile devices used in the Exploratorium can be defined as learning tools/educational supporting media based on personalization for optimizing visitors' extended museum experience.

Video classifier with adaptive blur network to determine horizontally extrapolatable video content (적응형 블러 기반 비디오의 수평적 확장 여부 판별 네트워크)

  • Minsun Kim;Changwook Seo;Hyun Ho Yun;Junyong Noh
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.99-107 / 2024
  • While the demand for extrapolating video content horizontally or vertically is increasing, even the most advanced techniques cannot successfully extrapolate all videos. It is therefore important to determine whether a given video can be extrapolated well before attempting the actual extrapolation, which helps avoid wasting computing resources. This paper proposes a video classifier that can identify whether a video is suitable for horizontal extrapolation. The classifier utilizes optical flow and an adaptive Gaussian blur network, and can be applied to flow-based video extrapolation methods. The labeling for training was rigorously conducted through user tests and quantitative evaluations. Training on this labeled dataset produced a network that determines the extrapolation capability of a given video. The proposed classifier achieved much more accurate classification performance than methods that simply use the original video or a fixed blur, by effectively capturing the characteristics of the video through optical flow and the adaptive Gaussian blur network. This classifier can be utilized, in conjunction with automatic video extrapolation techniques, in various fields requiring immersive viewing experiences.
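
In the paper the blur is produced by a learned network; purely to illustrate the underlying idea of modulating Gaussian blur strength by optical-flow magnitude, the sketch below computes a blur sigma from the mean flow magnitude and applies a separable Gaussian filter. The parameter names (`base_sigma`, `gain`) and the global (rather than learned, per-region) sigma are simplifying assumptions, not the paper's method.

```python
import numpy as np

def gaussian_kernel(sigma):
    """1D normalized Gaussian kernel truncated at ~3 sigma."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def adaptive_blur(frame, flow, base_sigma=0.5, gain=0.5):
    """Blur a grayscale frame with strength tied to mean optical-flow magnitude.

    `flow` has shape (H, W, 2): per-pixel (dx, dy) motion vectors.
    Returns the blurred frame and the sigma that was used.
    """
    mag = np.linalg.norm(flow, axis=-1).mean()   # average motion strength
    sigma = base_sigma + gain * mag
    k = gaussian_kernel(sigma)
    pad = len(k) // 2
    # separable convolution: rows, then columns (edge padding keeps the size)
    blur1d = lambda v: np.convolve(np.pad(v, pad, mode='edge'), k, 'valid')
    tmp = np.apply_along_axis(blur1d, 1, frame.astype(float))
    out = np.apply_along_axis(blur1d, 0, tmp)
    return out, sigma
```

Fast-moving content thus gets blurred more aggressively before classification, which is the intuition behind preferring an adaptive blur over the original frame or a fixed blur.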

Enhancing Leadership Skills of Construction Students Through Conversational AI-Based Virtual Platform

  • Rahat HUSSAIN;Akeem PEDRO;Mehrtash SOLTANI;Si Van Tien TRAN;Syed Farhan Alam ZAIDI;Chansik PARK;Doyeop LEE
    • International conference on construction engineering and project management / 2024.07a / pp.1326-1327 / 2024
  • The construction industry is renowned for its dynamic and intricate characteristics, which demand proficient leadership skills for successful project management. However, the existing training platforms within this sector often overlook the significance of soft skills in leadership development. These platforms primarily focus on safety, work processes, and technical modules, leaving a noticeable gap in preparing future leaders, especially students in the construction domain, for the complex challenges they will encounter in their professional careers. It is crucial to recognize that effective leadership in construction projects requires not only technical expertise but also the ability to communicate effectively, collaborate with diverse stakeholders, and navigate complex relationships. These soft skills are critical for managing teams, resolving conflicts, and driving successful project outcomes. In addition, the construction sector has been slow in adopting and harnessing the potential of advanced emerging technologies, such as virtual reality and artificial intelligence, to enhance the soft skills of future leaders. Therefore, there is a need for a platform where students can practice complex situations and conversations in a safe and repeatable training environment. To address these challenges, this study proposes a pioneering approach by integrating conversational AI techniques using large language models (LLMs) within virtual worlds. Although LLMs like ChatGPT possess extensive knowledge across various domains, their responses may lack relevance in specific contexts. Prompt engineering techniques are utilized to ensure more accurate and effective responses, tailored to the specific requirements of the targeted users. This involves designing and refining the input prompts given to the language model to guide its response generation.
By carefully crafting the prompts and providing context-specific instructions, the model can generate responses that are more relevant and aligned with the desired outcomes of the training program. The proposed system offers interactive engagement to students by simulating diverse construction site roles through conversational AI-based agents. Students can face realistic challenges that test and enhance their soft skills in a practical context. They can engage in conversations with AI-based avatars representing different construction site roles, such as machine operators, laborers, and site managers. These avatars are equipped with AI capabilities to respond dynamically to user interactions, allowing students to practice their communication and negotiation skills in realistic scenarios. Additionally, the introduction of AI instructors can provide guidance, feedback, and coaching tailored to the individual needs of each student, enhancing the effectiveness of the training program. The AI instructors can provide immediate feedback and guidance, helping students improve their decision-making and problem-solving abilities. The proposed immersive learning environment is expected to significantly enhance students' leadership competencies, such as communication, decision-making, and conflict resolution, in practical contexts. This study highlights the benefits of utilizing conversational AI in educational settings to prepare construction students for real-world leadership roles. By providing hands-on, practical experience in dealing with site-specific challenges, students can develop the necessary skills and confidence to excel in their future roles.
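
The paper's actual prompts are not reproduced here, but the role-conditioning step it describes can be sketched as a small prompt-construction helper. The role names, profile texts, and instruction wording below are illustrative assumptions; a real system would pass the resulting string as the system prompt of an LLM chat API call, which is omitted.

```python
# Hypothetical role profiles for construction-site avatars (illustrative only).
ROLE_PROFILES = {
    "site_manager": "You oversee schedule, budget, and coordination between trades.",
    "crane_operator": "You operate a tower crane and are strict about lift-plan safety.",
    "laborer": "You are a general laborer concerned about workload and site conditions.",
}

def build_role_prompt(role, scenario, student_goal):
    """Compose a context-specific system prompt for a conversational agent,
    combining a role profile, a scenario, and the skill the student practices."""
    if role not in ROLE_PROFILES:
        raise ValueError(f"unknown role: {role}")
    return (
        f"You are role-playing a {role.replace('_', ' ')} on a construction site. "
        f"{ROLE_PROFILES[role]} "
        f"Scenario: {scenario} "
        f"Stay in character, respond in 1-3 sentences, and push back realistically "
        f"so the student must practice: {student_goal}."
    )
```

For example, `build_role_prompt("crane_operator", "A lift is scheduled during high winds.", "negotiating a safe delay")` yields a system prompt that keeps the avatar in character and anchored to the training scenario, which is the essence of the prompt-engineering step the abstract describes.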

A Development of a Mixed-Reality (MR) Education and Training System based on user Environment for Job Training for Radiation Workers in the Nondestructive Industry (비파괴산업 분야 방사선작업종사자 직장교육을 위한 사용자 환경 기반 혼합현실(MR) 교육훈련 시스템 개발)

  • Park, Hyong-Hu;Shim, Jae-Goo;Park, Jeong-kyu;Son, Jeong-Bong;Kwon, Soon-Mu
    • Journal of the Korean Society of Radiology / v.15 no.1 / pp.45-54 / 2021
  • This study describes the creation of Mixed Reality-based educational content for the nondestructive testing field. Currently, almost no Mixed Reality-based educational content exists in the radiation field. In the nondestructive inspection industry, the working environment is poor, manufacturers often employ ten or fewer workers, and no educational infrastructure has been built; there is no hands-on practical training, only job training and safety education that merely convey information. To address this, we developed Mixed Reality-based training content for nondestructive workers. The content was developed for Microsoft's HoloLens 2 HMD device. It targets a base resolution of 1280 × 720; because resolution differs across devices, the UI sides are laid out by aligning the Left, Right, Bottom, and Top anchor positions. Since large images inflate the atlas size, large elements such as the background and the upper area were implemented as UITexture instead. Labels, buttons, scroll views, and sprites were created with the UI Widget Wizard. This system can provide workers with realistic educational content, enable self-directed education, and deliver interesting, immersive education with reality-based 3D stereoscopic images. Through the images provided in Mixed Reality, learners can directly manipulate objects through interaction between the real world and virtual reality, improving learning efficiency. In addition, Mixed Reality education can play a major role in non-face-to-face learning content in the COVID-19 era, unconstrained by time and place.