• Title/Summary/Keyword: immersive

Search Results: 713

Development for Multi-modal Realistic Experience I/O Interaction System (멀티모달 실감 경험 I/O 인터랙션 시스템 개발)

  • Park, Jae-Un;Whang, Min-Cheol;Lee, Jung-Nyun;Heo, Hwan;Jeong, Yong-Mu
    • Science of Emotion and Sensibility / v.14 no.4 / pp.627-636 / 2011
  • The purpose of this study is to develop a multi-modal interaction system that provides a realistic and immersive experience through multi-modal interaction. The system recognizes user behavior, intention, and attention, which overcomes the limitations of uni-modal interaction. The multi-modal interaction system is based upon gesture interaction methods, intuitive gesture interaction, and attention evaluation technology. The gesture interaction methods were based on sensors selected by analyzing, through meta-analysis, the accuracy of 3-D gesture recognition technology. The elements of intuitive gesture interaction were derived from experimental results. The attention evaluation technology was developed through physiological signal analysis. The system is divided into three modules: a motion cognitive system, an eye gaze detecting system, and a bio-reaction sensing system. The first module, the motion cognitive system, uses an accelerometer and flexible sensors to recognize the user's hand and finger movements. The second module, the eye gaze detecting system, detects pupil movements and reactions. The final module, the bio-reaction sensing (attention evaluating) system, tracks cardiovascular and skin temperature reactions. This study will be used for the development of realistic digital entertainment technology.

Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services / v.12 no.3 / pp.119-129 / 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, the recent increasing demand for mobile augmented reality requires the development of efficient interaction technologies between the augmented virtual object and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, a human hand interface is used for the marker-less mobile augmented reality system. To implement the marker-less augmented system within the limited resources of a mobile device, compared with desktop environments, we propose a method that extracts an optimal hand region, which plays the role of the marker, and augments the object in real time using the camera attached to the mobile device. The optimal hand region detection consists of detecting the hand region with a YCbCr skin color model and extracting the optimal rectangular region with the Rotating Calipers algorithm. The extracted optimal rectangular region takes the role of a traditional marker. The proposed method resolves the problem of losing track of the fingertips when the hand is rotated or occluded in hand-marker systems. The experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
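
The hand-region step described in this abstract (YCbCr skin-color thresholding followed by an optimal enclosing rectangle) can be sketched roughly as follows. The Cb/Cr bounds are commonly cited skin-tone ranges, not values from the paper, and an axis-aligned bounding box stands in for the paper's Rotating Calipers rectangle:

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Boolean skin mask from a YCbCr chrominance threshold.

    The Cb/Cr ranges below are widely used skin-tone bounds; the paper's
    exact thresholds are not given in the abstract."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def bounding_rect(mask):
    """Axis-aligned bounding box (x, y, w, h) of the mask, or None if empty.

    The paper extracts an *optimal* (possibly rotated) rectangle with the
    Rotating Calipers algorithm; an axis-aligned box is a simpler stand-in."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (xs.min(), ys.min(), xs.max() - xs.min() + 1, ys.max() - ys.min() + 1)
```

In a real pipeline the mask would also be cleaned with morphological operations before the rectangle is fitted; that step is omitted here for brevity.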

Interactive Projection by Closed-loop based Position Tracking of Projected Area for Portable Projector (이동 프로젝터 투사영역의 폐회로 기반 위치추적에 의한 인터랙티브 투사)

  • Park, Ji-Young;Rhee, Seon-Min;Kim, Myoung-Hee
    • Journal of KIISE:Software and Applications / v.37 no.1 / pp.29-38 / 2010
  • We propose an interactive projection technique that displays details of a large image in high resolution and brightness by tracking a portable projector. A closed-loop based tracking method is presented to update the projected image while a user changes the position of the detail area by moving the portable projector. A marker is embedded in the large image to indicate the position to be occupied by the detail image projected by the portable projector. The marker is extracted from sequential images acquired by a camera attached to the portable projector. The marker position in the large display image is updated under the constraint that the center positions of the marker and the camera frame coincide in every camera frame. The image and the projective transformation for warping are calculated using the marker position and shape in the camera frame. The marker's four corner points are determined by a four-step segmentation process consisting of HSI-based camera image preprocessing, edge extraction by the Hough transform, a quadrangle test, and a cross-ratio test. The interactive projection system implemented by the proposed method runs at about 24 fps. In the user study, overall feedback on the system's usability was very positive.
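
The cross-ratio test in the four-step segmentation relies on the cross-ratio of four collinear points being invariant under projective transformation, so a detected quadrangle edge can be checked against the known marker geometry. A minimal sketch (the paper's exact formulation is not given in the abstract):

```python
import numpy as np

def cross_ratio(p1, p2, p3, p4):
    """Cross-ratio of four collinear 2-D points: (|13| * |24|) / (|23| * |14|).

    This quantity is projectively invariant, which is what makes it usable
    as a validation test on corner candidates seen through a camera."""
    d = lambda a, b: np.linalg.norm(np.asarray(a, float) - np.asarray(b, float))
    return (d(p1, p3) * d(p2, p4)) / (d(p2, p3) * d(p1, p4))
```

Applying any homography to four collinear points leaves this value unchanged, so candidate corner sets whose cross-ratio deviates from the marker's known value can be rejected.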

A Study on Development of Experimental Contents Using 3-channel Multi-Image Playback Technique: Based on transparent OLED and dual layer display system (3채널 멀티 영상 재생 기법과 증강현실을 이용한 체험 콘텐츠 제작에 관한 연구: 투명 OLED 및 듀얼 레이어 디스플레이 시스템 기반)

  • Lee, Sang-Hyun
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.6 / pp.151-160 / 2017
  • Displaying high-quality video on a large display is a common way to develop tourist attractions and local culture into experiential content, but differentiating such content from that of other regions requires the participant's active participation beyond a passive visual experience. In this paper, a transparent OLED dual-layer display system is combined with extended-image implementation and augmented-interaction techniques so that participants can experience regional tourist attractions and scenic sights as if they were new, first-hand experiences. Additional image and UI layers are applied on top of the video layers so that visitors can browse sightseeing information, weather, maps, accommodations, festivals, and photographs together with the imagery. In addition to the dual-layer system, a multi-display configuration with one vertical 55-inch display on each side was added, enhancing immersion and the fun of interface interlocking. Using a transparent OLED, a dual-layer panel, and a 3-channel multi-image playback technique, augmented experiential content was developed that lets visitors experience local attractions in Jeollanam-do province, Korea, at any time without limitations of time and space.

Augmented Presentation Framework Design and System Implementation for Immersive Information Visualization and Delivery (몰입적 정보 표현과 전달을 위한 증강 프레젠테이션 디자인 및 시스템 구현)

  • Kim, Minju;Wohn, Kwangyun
    • Journal of the HCI Society of Korea / v.12 no.1 / pp.5-13 / 2017
  • Interactive intervention by a human presenter is one of the important factors that make visualization more effective. Rather than just showing the content, the presenter enhances the information delivery by providing the context of the visualization. In this paper, we define this as augmented presentation. In the augmented presentation concept, the presenter can facilitate the presentation more actively by being fully immersed in the visualization space and by reaching into and interacting with the digital information. To concretize the concept, we design a presentation space that lets the presenter be seamlessly immersed in the visualization. We also expand the presenter's roles as storyteller, controller, and augmenter, allowing the presenter to fully support the communicative process between the audience and the visualization. We then present an augmented presentation system to verify the proposed concept. We render 3D visualization through a half-mirror film and a wall projection screen placed in parallel, apply stereoscopic images, and spatially align the presenter inside the virtual visualization space. We then conduct a controlled experiment to investigate the audience's subjective level of immersion and engagement with HoloStation compared to a traditional presentation system. Our initial investigation suggests that the newly conceived augmented presentation has the potential not only to enhance information presentation but also to support the delivery of visualization.

MPEG-H 3D Audio Decoder Structure and Complexity Analysis (MPEG-H 3D 오디오 표준 복호화기 구조 및 연산량 분석)

  • Moon, Hyeongi;Park, Young-cheol;Lee, Yong Ju;Whang, Young-soo
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.2 / pp.432-443 / 2017
  • The primary goal of the MPEG-H 3D Audio standard is to provide immersive audio environments for high-resolution broadcasting services such as UHDTV. The standard incorporates a wide range of technologies, such as encoding/decoding for multi-channel/object/scene-based signals, rendering for 3D audio in various playback environments, and post-processing. The reference software decoder of the standard combines several modules and can operate in various modes; because each module is an independent executable run sequentially, real-time decoding is impossible. In this paper, we build DLL libraries of the standard's core decoder, format converter, object renderer, and binaural renderer, and integrate them to enable frame-based decoding. In addition, by measuring the computational complexity of each mode of the MPEG-H 3D Audio decoder, this paper provides a reference for selecting an appropriate decoding mode for various hardware platforms. The measurements show that the low-complexity profile included in the Korean broadcasting standard has a computational complexity of 2.8 to 12.4 times that of the QMF synthesis operation when rendering to channel signals, and 4.1 to 15.3 times when rendering to binaural signals.

Shadow Removal in Front Projection System using a Depth Camera (깊이 카메라를 이용한 전방 프로젝션 환경에서 그림자 제거)

  • Kim, Jaedong;Seo, Hyunggoog;Cha, Seunghoon;Noh, Junyong
    • Journal of the Korea Computer Graphics Society / v.21 no.3 / pp.1-10 / 2015
  • One way to create a visually immersive environment is to utilize a front projection system. Especially when there is not enough space behind the screen to install a back projection system, front projection is the appropriate choice. A drawback of front projection, however, is shadow interference: a shadow is cast on the screen when the user is located between the screen and the projector. This shadow can degrade the user experience and reduce the sense of immersion by hiding important information. There have been various attempts to eliminate shadows cast on the screen using multiple projectors that compensate for each other's missing information. In this mutual compensation there is a trade-off between calculation time and the desired accuracy: accurate estimation of the shadow usually requires heavy computation, while simple approaches suffer from including non-shadow regions in the result. We propose a novel approach to removing shadows in a front projection system using skeleton data obtained from a depth camera. The skeleton data helps accurately extract the shape of the shadow the user casts without requiring much computation. Our method also utilizes a distance field to remove the shadow afterimage that may occur when the user moves. We verify the effectiveness of our system through various experiments in an interactive environment created by a front projection system.
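
The distance-field idea for suppressing the shadow afterimage can be sketched as growing the detected shadow mask by a safety margin. The brute-force distance computation and the margin parameter below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def distance_field(mask):
    """Euclidean distance (in pixels) from every cell to the nearest shadow
    pixel, computed by brute force. Assumes a non-empty mask; fine for small
    grids, whereas a real system would use a linear-time distance transform."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1).astype(float)
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    grid = np.stack([yy.ravel(), xx.ravel()], axis=1).astype(float)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return d.reshape(h, w)

def expanded_shadow(mask, margin):
    """Grow the detected shadow region by `margin` pixels so the afterimage
    left behind by a moving user is also compensated (the distance-field
    idea from the abstract; `margin` is an illustrative parameter)."""
    return distance_field(mask) <= margin
```

The projector would then blank (or let the second projector fill in) every pixel inside the expanded region rather than only the instantaneous shadow.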

A Study on Core Factors and Application of Asymmetric VR Content (Asymmetric VR 콘텐츠 제작의 핵심 요인과 활용에 관한 연구)

  • Kim, Jinmo
    • Journal of the Korea Computer Graphics Society / v.23 no.5 / pp.39-49 / 2017
  • In this study, we propose the core factors and applications of asymmetric virtual reality (VR) content, in which a head-mounted display (HMD) user and non-HMD users can work together in a co-located space, leading to varied experiences and high presence. The core of the proposed asymmetric VR content is that all users are immersed in VR and participate in new experiences, reflecting a wide range of user participation and environments regardless of whether they wear an HMD. For this purpose, this study defines the role relationships between the HMD user and non-HMD users, the viewpoints provided to users, and the speech communication structure available among users. Based on this, we verified the core factors by directly producing assistive asymmetric VR content and cooperative asymmetric VR content. Finally, we conducted a survey to examine users' presence and experience of the proposed asymmetric VR content and to analyze how it can be applied. The results confirmed that if the purpose of the asymmetric VR content and the core factors for the two types of users are clearly distinguished and defined, the independent experience presented by the VR content, together with perceived presence, can provide a satisfactory experience to all users.

Real-Time Stereoscopic Visualization of Very Large Volume Data on CAVE (CAVE상에서의 방대한 볼륨 데이타의 실시간 입체 영상 가시화)

  • 임무진;이중연;조민수;이상산;임인성
    • Journal of KIISE:Computing Practices and Letters / v.8 no.6 / pp.679-691 / 2002
  • Volume visualization is an important subarea of scientific visualization, concerned with techniques for generating meaningful visual information from abstract and complex volume datasets defined in three- or higher-dimensional space. It has become increasingly important in various fields, including meteorology, medical science, and computational fluid dynamics. Virtual reality, on the other hand, is a research field focusing on techniques that provide experiences in virtual worlds through the visual, auditory, and tactile senses. In this paper, we develop a visualization system for CAVE, an immersive 3D virtual environment, which generates stereoscopic images from huge human volume datasets in real time using an improved volume visualization technique. To complement 3D texture-mapping based volume rendering methods, which easily slow down as data sizes increase, our system utilizes an image-based rendering technique to guarantee real-time performance. The system offers a variety of user interface functionality for effective visualization. In this article, we present a detailed description of our real-time stereoscopic visualization system and show how the Visible Korean Human dataset is effectively visualized on CAVE.

Stereoscopic Free-viewpoint Tour-Into-Picture Generation from a Single Image (단안 영상의 입체 자유시점 Tour-Into-Picture)

  • Kim, Je-Dong;Lee, Kwang-Hoon;Kim, Man-Bae
    • Journal of Broadcast Engineering / v.15 no.2 / pp.163-172 / 2010
  • Free-viewpoint video delivers active content in which users can see images rendered from viewpoints they choose. Its applications are found in broad areas, especially museum tours, entertainment, and so forth. As a new free-viewpoint application, this paper presents a stereoscopic free-viewpoint TIP (Tour Into Picture) in which users can navigate the inside of a single image by controlling a virtual camera and utilizing depth data. Unlike conventional TIP methods that provide 2D image or video, the proposed method provides users with 3D stereoscopic, free-viewpoint content. Navigating a picture with stereoscopic viewing delivers a more realistic and immersive perception. The method uses semi-automatic processing to produce a foreground mask, a background image, and a depth map. The second step is to navigate the single picture and obtain rendered images by perspective projection. For free-viewpoint viewing, a virtual camera supporting translation, rotation, look-around, and zooming is operated. In experiments, the proposed method was tested with 'Danopungjun', one of the famous paintings of the Chosun Dynasty. The free-viewpoint software is developed with MFC Visual C++ and the OpenGL libraries.
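
The virtual-camera operations listed in this abstract (translation, rotation, look-around, zooming) reduce to placing a pinhole camera and projecting scene points through it. A minimal sketch under those assumptions, not the paper's actual OpenGL renderer:

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """World-to-camera rotation for a virtual camera at `eye` looking at
    `target` (right-handed, OpenGL-style frame; `up` must not be parallel
    to the view direction). Returns (R, eye)."""
    eye, target = np.asarray(eye, float), np.asarray(target, float)
    f = target - eye
    f /= np.linalg.norm(f)                       # forward
    s = np.cross(f, np.asarray(up, float))
    s /= np.linalg.norm(s)                       # right
    u = np.cross(s, f)                           # true up
    R = np.stack([s, u, -f])                     # rows: right, up, -forward
    return R, eye

def project(point, R, eye, focal=1.0):
    """Pinhole perspective projection of a world point onto the image plane;
    increasing `focal` acts as zoom-in. Moving `eye` gives translation, and
    re-running look_at with a new `target` gives rotation/look-around."""
    pc = R @ (np.asarray(point, float) - eye)    # camera coordinates
    return focal * pc[0] / -pc[2], focal * pc[1] / -pc[2]
```

In a TIP setting, the background plane and foreground billboards reconstructed from the single picture would be the scene geometry fed through this projection each frame.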