• Title/Summary/Keyword: Head Mounted Display


A Study on AR-based Interface Technique for efficient UAV Operation using a See-through HMD (투시형 HMD를 이용한 효율적인 UAV 운용을 위한 증강현실 기반의 인터페이스 기법에 대한 연구)

  • Wan Joo Cho;Hyun Joon Chang;Yong Ho Moon
    • Journal of Aerospace System Engineering
    • /
    • v.17 no.6
    • /
    • pp.9-15
    • /
    • 2023
  • To prevent and respond to disasters effectively, several techniques have been developed in which a pilot wearing a see-through Head Mounted Display (HMD) performs rescue activities using images transmitted from an Unmanned Aerial Vehicle (UAV). However, these techniques make it difficult to quickly determine and execute tasks appropriate to the on-site situation, because the pilot cannot perceive the entire field in an integrated manner. To overcome this problem, we propose an AR-based interface technique that allows a rescuer wearing a see-through HMD to operate a UAV efficiently. Simulation results show that the proposed interface allows the rescuer to control the UAV's gimbal and flight at high speed using finger gestures under visibility conditions.
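
The gesture-based control described in the abstract can be pictured as a lookup from recognized gestures to UAV subsystem commands. This is a minimal hypothetical sketch, not the paper's implementation; the gesture names and command pairs are illustrative assumptions.

```python
# Hypothetical sketch: mapping recognized finger gestures to UAV commands
# for gimbal and flight control. All names are illustrative assumptions.

GESTURE_COMMANDS = {
    "pinch": ("gimbal", "zoom_in"),
    "spread": ("gimbal", "zoom_out"),
    "swipe_left": ("flight", "yaw_left"),
    "swipe_right": ("flight", "yaw_right"),
    "point_up": ("flight", "ascend"),
}

def dispatch(gesture: str):
    """Translate a recognized gesture into a (subsystem, command) pair."""
    # Unknown gestures fall back to a safe hover command.
    return GESTURE_COMMANDS.get(gesture, ("none", "hover"))

print(dispatch("pinch"))  # ('gimbal', 'zoom_in')
print(dispatch("wave"))   # ('none', 'hover')
```

A real system would feed this dispatcher from a hand-tracking pipeline on the HMD; the table form keeps the gesture vocabulary easy to extend.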

A Study on effective directive technique of 3D animation in Virtual Reality -Focus on Interactive short using 3D Animation making of Unreal Engine- (가상현실에서 효과적인 3차원 영상 연출을 위한 연구 -언리얼 엔진의 영상 제작을 이용한 인터렉티브 쇼트 중심으로-)

  • Lee, Jun-soo
    • Cartoon and Animation Studies
    • /
    • s.47
    • /
    • pp.1-29
    • /
    • 2017
  • 360-degree virtual reality has long been available as a technology, and in recent years it has been actively promoted worldwide owing to the development of devices such as the HMD (Head Mounted Display) and of hardware for controlling and playing virtual reality images. Producing 360-degree VR requires a different mode of production from traditional video, and new considerations for the user have begun to appear. Since virtual reality imagery targets a platform that demands immersion, presence, and interaction, a suitable cinematography is needed. In VR, users can freely explore the world created by the director and concentrate on whatever interests them while the image plays. The director, however, must devise and install devices that keep the observer focused on the narrative progression and the images to be delivered. Among the various methods of conveying images, the director can use shot composition. In this paper, we study how to apply directing techniques based on shot composition effectively to 360-degree virtual reality. There is still no dominant killer content, at home or abroad. Even so, the potential of virtual reality is recognized and a variety of images are being produced, so production tends to follow traditional filmmaking methods, including their shot composition. In 360-degree virtual reality, however, the long take or the blocking technique of the conventional third-person viewpoint serves as the main directing approach, and the limits of shot composition become apparent.
In addition, while the viewer can interactively look around the 360-degree scene using HMD tracking, the composition of shots and the connections between them remain entirely dependent on the director, as in existing cinematography. In this study, I examined whether the viewer can freely change the cinematography, such as the shot composition, at a time of the user's choosing, using the interactive nature of VR imagery. To do this, a 3D animation was created with the Unreal Engine game tool to construct an interactive image. Using Unreal Engine's visual scripting system, called Blueprint, we created a device that evaluates a condition as true or false with a trigger node, producing a variety of shots. Through this, various directing techniques can be developed, and the related research is expected to aid the development of 360-degree VR imagery.
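
The Blueprint logic the abstract describes, a trigger that branches on a true/false condition to select a shot, can be sketched in ordinary code. This Python analogue is an illustrative assumption about the structure, not the actual Blueprint graph; the class and shot names are invented.

```python
# Illustrative Python analogue of a Blueprint "Branch" node fed by a
# trigger: a boolean condition selects which shot plays next.
# Class and shot names are assumptions for illustration.

class ShotTrigger:
    def __init__(self, true_shot: str, false_shot: str):
        self.true_shot = true_shot
        self.false_shot = false_shot

    def on_overlap(self, viewer_in_volume: bool) -> str:
        # Branch node: pick the shot according to the condition.
        return self.true_shot if viewer_in_volume else self.false_shot

trigger = ShotTrigger(true_shot="close_up", false_shot="wide_shot")
print(trigger.on_overlap(True))   # close_up
print(trigger.on_overlap(False))  # wide_shot
```

In the Blueprint editor this corresponds to a trigger volume's overlap event wired into a Branch node whose True/False pins lead to different camera cuts.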

Effect of Field of View on Egocentric Distance Perception in Real and Virtual Environment (현실과 가상현실에서 시야각이 자기중심적 거리지각에 미치는 영향)

  • Jin, Seungjae;Kim, Shinwoo;Li, Hyung-Chul O.
    • Science of Emotion and Sensibility
    • /
    • v.24 no.4
    • /
    • pp.17-28
    • /
    • 2021
  • The purpose of this research was to examine the effect of field of view on egocentric distance perception in real and virtual environments. A replica that mimicked the real environment condition was used to create the virtual environment condition. We manipulated the field of view equally in both conditions, using glasses that limit the field of view in the real-world condition and limiting the field of view in the virtual-world condition in an equivalent manner via HMD. Eighteen participants observed the target with a limited field of view in the real and virtual environments without head movement. We then measured perceived distance using the timed imagined walking method, which records the time each participant takes to mentally walk to the target. The target was shown three times at three different distances from the participants: 3, 4, and 5 m. For the analysis, we converted the time estimates into distance estimates. Overall, the estimated distance in the virtual environment condition was shorter than in the real environment condition, and it decreased as the field of view shrank. In the real-world condition the estimated distance did not vary significantly with field of view; in the virtual environment it decreased as the field of view narrowed, whereas in the real environment it tended to increase. The implications of the results and some future research directions are discussed.
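
The conversion step mentioned above, turning imagined walking times into distance estimates, amounts to multiplying time by a walking speed. A minimal sketch, assuming an illustrative average walking speed (the paper's actual calibration value is not given here):

```python
# Minimal sketch of the timed imagined walking conversion: perceived
# distance is estimated as imagined walking time times walking speed.
# The default speed of 1.2 m/s is an assumption for illustration.

def time_to_distance(imagined_time_s: float, walking_speed_mps: float = 1.2) -> float:
    """Convert an imagined walking time (seconds) into a distance estimate (meters)."""
    return imagined_time_s * walking_speed_mps

# e.g. 3.5 s of imagined walking at ~1.2 m/s is about 4.2 m
print(round(time_to_distance(3.5), 2))  # 4.2
```

In practice each participant's own measured walking speed would replace the default, so that individual gait differences do not bias the distance estimates.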

3D Stereoscopic Augmented Reality with a Monocular Camera (단안카메라 기반 삼차원 입체영상 증강현실)

  • Rho, Seungmin;Lee, Jinwoo;Hwang, Jae-In;Kim, Junho
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.11-20
    • /
    • 2016
  • This paper introduces an effective method for generating 3D stereoscopic images that give immersive 3D experiences to viewers using mobile binocular HMDs. Most previous AR systems with monocular cameras share a common limitation: the same real-world image is presented to both of the viewer's eyes, without parallax. In this paper, based on the assumption that viewers focus on the marker in marker-based AR scenarios, we recover the binocular disparity of the camera image and a virtual object using the pose information of the marker. The basic idea is to generate binocular disparity for the real-world image and the virtual object by placing the image on a 2D plane in 3D space defined by the marker's pose. For non-marker areas of the image, we apply blur effects to reduce visual discomfort by decreasing their sharpness. Our user studies show that, compared to previous binocular AR systems, the proposed method provides a strong sense of depth, a high sense of reality, and good visual comfort.
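
The disparity recovered from the marker's pose follows the standard stereo relation: disparity is proportional to focal length times interpupillary distance, divided by depth. This sketch uses that textbook formula with assumed parameter values; it is not the paper's exact pipeline.

```python
# Hedged sketch of the core geometry: given the marker plane's depth,
# compute the horizontal disparity used to shift the monocular image
# into a synthetic left/right pair. Parameter values are assumptions.

def disparity_px(depth_m: float, ipd_m: float = 0.064,
                 focal_px: float = 800.0) -> float:
    """Horizontal disparity in pixels for a plane at depth_m meters."""
    return focal_px * ipd_m / depth_m

# A marker plane at 0.5 m yields roughly 102 pixels of disparity.
print(round(disparity_px(0.5), 1))  # 102.4
```

Shifting the image by plus and minus half this disparity for the left and right eyes places the marker plane at the intended depth behind the HMD screen.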

Augmented Reality Based Tangible Interface For Digital Lighting of CAID System (CAID 시스템의 디지털 라이팅을 위한 증강 현실 기반의 실체적 인터페이스에 관한 연구)

  • Hwang, Jung-Ah;Nam, Tek-Jin
    • Archives of design research
    • /
    • v.20 no.3 s.71
    • /
    • pp.119-128
    • /
    • 2007
  • With the development of digital technologies, CAID has become an essential part of the industrial design process. Creating photo-realistic images from a virtual scene with 3D models is one of the specialized tasks for CAID users. This task requires a complex interface for setting the positions and parameters of the camera and lights to obtain optimal rendering results. However, the user interface of existing CAID tools is not simple for designers, because the task is mostly accomplished in a parameter-setting dialogue window. This research addresses these interface issues, in particular those related to lighting, by developing and evaluating TLS (Tangible Lighting Studio), which uses Augmented Reality and a Tangible User Interface. The interface for positioning objects and setting parameters becomes tangible and is distributed in the workspace to support a more intuitive rendering task. TLS consists of markers, a physical controller, and a see-through HMD (Head Mounted Display). The user can directly control the lighting parameters in the AR workspace. In an evaluation experiment, TLS provided higher effectiveness, efficiency, and user satisfaction than the existing GUI (Graphical User Interface) method. The application of TLS is expected to expand to photography education and architectural simulation.
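
The tangible interface replaces dialogue-box parameter entry with direct manipulation: a tracked marker's pose drives the light's position, and a physical controller sets its parameters. A minimal sketch of that data flow, with all names and ranges invented for illustration:

```python
# Hypothetical sketch of the TLS idea: a tracked marker's position drives
# a light, and a physical dial sets intensity, replacing dialogue input.
# All names and value ranges are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Light:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    intensity: float = 1.0

def update_from_marker(light: Light, marker_pos, dial_value: float) -> Light:
    """Move the light to the tracked marker's position; the dial sets intensity."""
    light.x, light.y, light.z = marker_pos
    light.intensity = max(0.0, min(dial_value, 10.0))  # clamp to a sane range
    return light

light = update_from_marker(Light(), (1.0, 2.0, 0.5), dial_value=3.2)
print(light.intensity)  # 3.2
```

The point of the design is that every render parameter maps to a physical object in the workspace, so positioning a light feels like moving a lamp rather than typing coordinates.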


Availability of Mobile Art in Smartphone Environment of Augmented Reality Content Industrial Technology (증강현실 콘텐츠 산업기술의 스마트폰 환경 모바일 아트 활용 가능성)

  • Kim, Hee-Young;Shin, Chang-Ok
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.5
    • /
    • pp.48-57
    • /
    • 2013
  • Smartphones provide users with an environment for communication and information sharing, and at the same time play an important role in the development of mobile technology and mobile art. Research related to smartphone technology is accelerating, especially with the advent of the mobile Augmented Reality (AR) age, but studies on user participation, which is essential for the AR content industry, have been insufficient. In that regard, assistance from the mobile art field, which has already developed these characteristics, is essential. This article therefore classifies mobile art, which has not been studied much domestically, into feature phone usage and smartphone usage, and analyzes example cases of each with the three most-used methods. The usage of feature phones, which rely on the sound and images of mobile devices, can be divided into three approaches: installation and performance methods, the single-channel video art method, and the five-senses communication method. The usage of smartphones, which employ sensors, cameras, GPS, and AR, can be divided into location-based AR, marker-based AR, and markerless AR. Examining mobile AR content technology by industry, combined methods are used: tourism- and game-related industries use location-based AR, education- and medicine-related industries use marker-based AR, and shopping-related industries use markerless AR. The development of the AR content industry is expected to accelerate through mobile art that makes use of combined technologies and constant communication through active user participation. The mobile AR industry is predicted to develop toward miniaturized HMDs, the integration of hologram technology and artificial intelligence, and full use of big data and social networks, so as to overcome the current technological limitations of AR.

A Study on Core Factors and Application of Asymmetric VR Content (Asymmetric VR 콘텐츠 제작의 핵심 요인과 활용에 관한 연구)

  • Kim, Jinmo
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.5
    • /
    • pp.39-49
    • /
    • 2017
  • In this study, we propose the core factors and applications of asymmetric virtual reality (VR) content in which a head-mounted display (HMD) user and non-HMD users can work together in a co-located space, leading to varied experiences and high presence. The core of the proposed asymmetric VR content is that all users are immersed in VR and participate in new experiences, reflecting a wide range of user participation and environments regardless of whether a user wears an HMD. For this purpose, this study defines the role relationships between the HMD user and non-HMD users, the viewpoints provided to each, and the speech communication structure available among them. Based on this, we verified the core factors by directly producing assistive asymmetric VR content and cooperative asymmetric VR content. Finally, we conducted a survey to examine users' presence and experience of the proposed asymmetric VR content and to analyze how it can be applied. As a result, we confirmed that if the purpose of the asymmetric VR content and the core factors for the two types of users are clearly distinguished and defined, the independent experience presented by the VR content, together with perceived presence, can provide a satisfying experience to all users.

Production Technology for Multi-face Convergence Performance (Multi-face Convergence 공연을 위한 제작 기술)

  • You, Mi;Son, Tae-Woong;Kim, Sang-Il
    • Journal of Broadcast Engineering
    • /
    • v.25 no.4
    • /
    • pp.475-486
    • /
    • 2020
  • This paper describes media art technology for high-tech performances and exhibitions. An interactive stroke is created in VR and then projected in real time using a media facade technique. From among our traditional dramas that emphasize linear movement, movements were extracted from the Bongsan mask dance, and these line movements were used in a media art performance called 'Multi-face Convergence'. When the motion data enters the virtual space, geometry consisting of faces is created in the VR space. The created strokes can be assigned various brush types; for the performance, a stroke with a red fire effect matching the dynamic movement was used, made to harmonize with the dancers performing the Bongsan mask dance. As a medium, VR has characteristics that make it difficult to blend into a live performance, but this performance overcame that limitation by using the media facade technique. We propose the world's first performance technique combining interactive strokes with a traditional dance performance.

The Effects of Emotional Interaction with Virtual Student on the User's Eye-fixation and Virtual Presence in the Teaching Simulation (가상현실 수업시뮬레이션에서 가상학생과의 정서적 상호작용이 사용자의 시선응시 및 가상실재감에 미치는 영향)

  • Ryu, Jeeheon;Kim, Kukhyeon
    • The Journal of the Korea Contents Association
    • /
    • v.20 no.2
    • /
    • pp.581-593
    • /
    • 2020
  • The purpose of this study was to examine eye-fixation times on different parts of a student avatar, and virtual presence, across two scenarios in a virtual reality-based teaching simulation. The study sought to identify where a user's attention goes while interacting with a student avatar; by examining where a user gazes during a conversation with the avatar, we gain a better understanding of non-verbal communication. Forty-five college students (21 females and 24 males) participated in the experiment. They held verbal conversations with a student avatar in the teaching simulation under two scenarios. While they were conversing with the virtual character, their eye movements were collected through a head-mounted display with an embedded eye-tracking function. The results revealed significant differences in eye-fixation times: participants gazed longer at the facial expression than at any other area, and fixation time on the facial expression was significantly longer than on gestures (F=3.75, p<.05). However, virtual presence did not differ significantly between the two scenarios. These results suggest that users focus on the face more than on gestures when they interact emotionally with a virtual character.
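
Comparing fixation times across areas of interest starts with summing fixation durations per area, as in the face-versus-gesture comparison above. A small sketch with invented sample data (the AOI labels and durations are illustrative assumptions, not the study's data):

```python
# Illustrative sketch: aggregating eye-fixation durations per area of
# interest (AOI) before comparing areas. Sample data is invented.

from collections import defaultdict

# (AOI label, fixation duration in seconds) pairs from an eye tracker
fixations = [("face", 0.42), ("gesture", 0.21), ("face", 0.55),
             ("body", 0.10), ("face", 0.38), ("gesture", 0.17)]

totals = defaultdict(float)
for aoi, duration in fixations:
    totals[aoi] += duration

longest = max(totals, key=totals.get)
print(longest, round(totals[longest], 2))  # face 1.35
```

The per-AOI totals, collected per participant, would then feed a repeated-measures test such as the F-test reported in the abstract.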

Predicting Sensitivity of Motion Sickness using by Pattern of Cardinal Gaze Position (기본 주시눈 위치의 패턴을 이용한 영상멀미의 민감도 예측)

  • Park, Sangin;Lee, Dong Won;Mun, Sungchul;Whang, Mincheol
    • Journal of the Korea Convergence Society
    • /
    • v.9 no.11
    • /
    • pp.227-235
    • /
    • 2018
  • The aim of this study was to predict sensitivity to motion sickness (MS) from the pattern of cardinal gaze position (CGP) measured before experiencing virtual reality (VR) content. Twenty volunteers of both genders (8 females, mean age 28.42 ± 3.17) participated in the experiment. Each participant's CGP pattern was measured for 5 minutes, after which they watched VR content for 15 minutes. After watching, participants reported their subjective experience of MS using the Simulator Sickness Questionnaire (SSQ). Statistical relationships between CGP and SSQ scores were tested using Pearson correlation analysis and independent t-tests, and a prediction model was derived by multiple regression. The PCPA and PCPR indicators from CGP showed significant differences and strong-to-moderate positive correlations with SSQ scores. The prediction model was tested using the correlation coefficient and mean error; SSQ scores from subjective ratings and from the prediction model showed a strong positive correlation and a small difference.
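
The prediction step amounts to regressing SSQ scores on gaze-pattern indicators. As a minimal sketch, here is an ordinary least-squares fit with a single predictor and invented data; the paper's actual model is a multiple regression over several CGP indicators, and the values below are not its coefficients.

```python
# Minimal sketch of the prediction idea: fit a least-squares line from a
# gaze-pattern indicator to SSQ scores, then predict a new score.
# Data values are invented for illustration.

def ols(xs, ys):
    """Simple one-predictor ordinary least squares: returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return intercept, slope

pcpa = [0.2, 0.4, 0.6, 0.8]      # hypothetical CGP indicator values
ssq = [10.0, 20.0, 30.0, 40.0]   # hypothetical SSQ scores
a, b = ols(pcpa, ssq)

pred = a + b * 0.5               # predicted SSQ for indicator value 0.5
print(round(pred, 1))  # 25.0
```

The paper additionally validates such a model against held-out subjective ratings via the correlation coefficient and mean error, which this sketch omits.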