• Title/Summary/Keyword: Interaction Method (인터랙션방법)

199 search results

A Study on the Gesture Based Virtual Object Manipulation Method in Multi-Mixed Reality

  • Park, Sung-Jun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.2
    • /
    • pp.125-132
    • /
    • 2021
  • In this paper, we propose a method for constructing a collaborative mixed-reality environment and for working with wearable IoT devices in it. Mixed reality (MR) is a combination of virtual reality and augmented reality: users can view real and virtual objects at the same time, and, unlike VR, MR HMDs do not cause motion sickness. Because MR HMDs are wireless, the technology is attracting attention for application in industrial fields. The Myo wearable device enables arm-rotation tracking and hand-gesture recognition using a three-axis sensor, an EMG sensor, and an acceleration sensor. Although various MR studies are in progress, there has been little discussion of environments in which multiple people can participate in mixed reality and manipulate virtual objects with their own hands. In this paper, we propose a method for constructing an environment where collaboration is possible, together with an interaction method for smooth interaction, in order to apply mixed reality in real industrial fields. As a result, two people could participate in the mixed-reality environment at the same time, share a single unified object, and each interact with it through the Myo wearable interface device.
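The shared-object manipulation described above can be illustrated with a small sketch. The gesture names below follow common Myo usage, and the action mapping and state layout are assumptions for illustration, not the paper's actual design:

```python
# Hypothetical mapping from Myo hand gestures to manipulations of a
# shared virtual object, represented here as a plain state dict.
ACTIONS = {"fist": "grab", "fingers_spread": "release",
           "wave_in": "rotate_left", "wave_out": "rotate_right"}

def handle_gesture(obj, gesture):
    """Apply the action bound to a recognized gesture to the object state."""
    action = ACTIONS.get(gesture)
    if action == "grab":
        obj["held"] = True
    elif action == "release":
        obj["held"] = False
    elif action and obj.get("held"):
        # Rotation gestures only act while the object is held.
        step = 15 if action == "rotate_right" else -15
        obj["rotation"] = obj.get("rotation", 0) + step
    return obj
```

In a multi-user session, each participant's Myo events would feed this handler for the same shared state.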

UI Design of the MMORPG using Storytelling (스토리텔링을 적용한 MMORPG UI 디자인)

  • Yoo, Wang-Yun
    • The Journal of the Korea Contents Association
    • /
    • v.9 no.3
    • /
    • pp.118-126
    • /
    • 2009
  • A computer game has a narrative structure whose scenario proceeds through the player's consecutive interactions. This interaction is the core of gameplay: it is not only storytelling in itself, but is also implemented through the interface. The UI of a computer game is a tool for communication between the player and the game, and has traditionally been designed for usability. However, to deepen the pleasure that is the game's ultimate goal, players also need to empathize psychologically with the game scenario. As a way to induce this empathy, this study applies storytelling to the UI, where the player's interaction occurs first, and presents the result as UI designs for a casual MMORPG.

Interactive Motion Retargeting for Humanoid in Constrained Environment (제한된 환경 속에서 휴머노이드를 위한 인터랙티브 모션 리타겟팅)

  • Nam, Ha Jong;Lee, Ji Hye;Choi, Myung Geol
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.1-8
    • /
    • 2017
  • In this paper, we introduce a technique for retargeting human motion data to a humanoid body in a constrained environment. We assume that the given motion data includes detailed interactions, such as holding an object by hand or avoiding obstacles. We also assume that the humanoid joint structure differs from the human joint structure, and that the shape of the surrounding environment differs from that at the time of the original motion. Under such conditions, it is difficult to preserve the context of the interactions shown in the original motion data if a retargeting technique considers only the change of body shape. Our approach is to separate the problem into two smaller problems and solve them independently: retargeting the motion data to a new skeleton, and preserving the context of the interactions. We first retarget the given human motion data to the target humanoid body while ignoring the interaction with the environment. Then we precisely deform the shape of the environmental model to match the humanoid motion, so that the original interactions are reproduced. Finally, we set spatial constraints between the humanoid body and the environmental model, and restore the environmental model to its original shape. To demonstrate the usefulness of our method, we conducted an experiment using Boston Dynamics' Atlas robot. We expect that our method can also help with the humanoid motion-tracking problem in the future.
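The two-stage decomposition described above can be sketched in a few lines. The bone-length-scaling retarget and the contact-restoring translation below are simplified stand-ins for the paper's actual solvers, and all joint data are made up:

```python
# Stage 1 (sketch): retarget a joint chain to a new skeleton by scaling
# each joint offset by the target/source bone-length ratio.
def retarget_pose(joints, src_lengths, dst_lengths):
    out = [joints[0]]  # root stays in place
    for i in range(1, len(joints)):
        ratio = dst_lengths[i - 1] / src_lengths[i - 1]
        offset = tuple(j - p for j, p in zip(joints[i], joints[i - 1]))
        out.append(tuple(p + ratio * o for p, o in zip(out[i - 1], offset)))
    return out

# Stage 2 (sketch): deform the environment so the original hand-object
# contact is reproduced, by translating the contact anchor with the hand.
def restore_contact(env_anchor, hand_before, hand_after):
    delta = tuple(a - b for a, b in zip(hand_after, hand_before))
    return tuple(e + d for e, d in zip(env_anchor, delta))

# A 3-joint arm whose hand touches a handle; the target arm is longer.
arm = [(0.0, 0.0, 0.0), (0.0, 0.3, 0.0), (0.0, 0.6, 0.0)]
new_arm = retarget_pose(arm, src_lengths=[0.3, 0.3], dst_lengths=[0.4, 0.4])
handle = restore_contact((0.0, 0.6, 0.0), arm[-1], new_arm[-1])
```

The paper's final step, restoring the environment to its original shape under spatial constraints, would then pull both the handle and the constrained hand back together.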

Design of Vision-based Interaction Tool for 3D Interaction in Desktop Environment (데스크탑 환경에서의 3차원 상호작용을 위한 비전기반 인터랙션 도구의 설계)

  • Choi, Yoo-Joo;Rhee, Seon-Min;You, Hyo-Sun;Roh, Young-Sub
    • The KIPS Transactions: Part B
    • /
    • v.15B no.5
    • /
    • pp.421-434
    • /
    • 2008
  • As computer graphics, virtual reality, and augmented reality technologies have developed, many application areas based on them require 3D interaction, such as selecting and manipulating a 3D object. In this paper, we propose a framework for vision-based 3D interaction that simulates the functions of an expensive 3D mouse in a desktop environment. The proposed framework includes a specially manufactured interaction device using three-color LEDs. By recognizing the position and color of the LEDs from video sequences, various mouse events and 6-DOF interactions are supported. Since the proposed device is more intuitive and easier to use than an existing 3D mouse, which is expensive and requires skilled manipulation, it can be used without additional learning or training. In this paper, we explain how to build the three-color LED pointing device, which is one component of the proposed framework, how to calculate the 3D position and orientation of the pointer, and how to analyze the LED color from video sequences. We verify the accuracy and usefulness of the proposed device by measuring the error of the estimated 3D position and orientation.
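The three-color LED scheme suggests a simple color-to-event mapping once a pixel on the LED is found. The sketch below is hypothetical: the threshold values, dominance test, and event names are illustrative and not the paper's actual recognition pipeline:

```python
# Hypothetical mapping from an LED's color to a mouse event.
EVENTS = {"red": "left_click", "green": "right_click", "blue": "drag"}

def classify_led(r, g, b, threshold=200):
    """Return the LED color name if one RGB channel clearly dominates."""
    channels = {"red": r, "green": g, "blue": b}
    name, value = max(channels.items(), key=lambda kv: kv[1])
    others = [v for k, v in channels.items() if k != name]
    if value >= threshold and all(value - v > 50 for v in others):
        return name
    return None  # ambiguous pixel: no LED detected

def led_to_event(pixel):
    """Map a sampled (r, g, b) pixel to a mouse event, or None."""
    return EVENTS.get(classify_led(*pixel))
```

A real pipeline would first segment the LED blobs in each frame and average their pixels before classification.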

Content Types and Interactions Suitable for Digital Signage at Seoul Subway Stations as a Communication Platform for Citizens (시민들의 소통 플랫폼으로서 서울시 지하철 디지털 사이니지에 적합한 콘텐츠 및 인터랙션 연구)

  • Kang, Minjeong;Eune, Juhyun
    • The Journal of the Korea Contents Association
    • /
    • v.17 no.1
    • /
    • pp.337-350
    • /
    • 2017
  • As interactive, networked digital signage becomes increasingly popular in public spaces, the need to study content types and interactions suitable for it is emerging. In this study, we aim to find out what content people prefer to share, and how they prefer to interact, on digital signage installed in Seoul subway stations. We conduct a case study, a survey, and in-depth interviews. In the case study, we collect content types that have been communicated in public spaces. Among these content types, the survey identifies the contents preferred by Seoul citizens. We observe that the preferred contents differ by age: people in their 20s prefer subjects of personal interest, those in their 40s choose subjects of public interest, and those in their 30s prefer both. In addition, subjects of personal interest are relatively less preferred for reading, whereas they are preferred for writing. Lastly, regarding interaction type, we find that Seoul citizens prefer touching the screen directly over using a smartphone to express empathy. The contents that people write, along with the number of likes, can be archived as big data and analyzed so that Seoul city officials and citizens can learn what people think by age, location, and subject.

Webized Tangible Space (웹-기반 Tangible Space)

  • Ko, Heedong;Seo, Daeil;Yoo, Byounghyun
    • Journal of the Korea Computer Graphics Society
    • /
    • v.23 no.3
    • /
    • pp.77-85
    • /
    • 2017
  • Tangible Space is a newly emerging interaction space combining mobile AR/VR computing and a ubiquitous computing environment with IoT. It spans from a physical environment augmented with virtual entities to immersive virtual environments mirroring the physical environment. Interacting with Tangible Space is logged just like interacting with the Web. By webizing Tangible Space, we gain persistence as a by-product, so that human life experience in the physical environment can be logged and shared just like the information created and shared on the current Web. The result is a powerful future direction for the Web: from a World Wide Web of information to a World Wide Web of life experiences.

Development of an Interactive Stereoscopic Image Display System using Invisible Interaction Surface Generation (비가시성 인터랙션 표면 생성을 통한 인터랙티브 입체영상 시연 시스템 개발)

  • Lee, Dong-Hoon;Yang, Hwang-Kyu
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.6 no.3
    • /
    • pp.371-379
    • /
    • 2011
  • In this paper, we propose a methodology for developing an interactive stereoscopic image display system. We adopt multi-touch recognition as the interaction method because we want to guarantee that multiple users can access and interact with the content without restriction. In this setting, however, some restrictions arise from the distance between the display and the participants. For this reason, this paper proposes invisible interaction surfaces that are generated in the air; these surfaces are used as the interaction medium instead of the display wall. We also present an effective way to generate and edit interactive stereoscopic images based on a game engine.
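An "invisible interaction surface" can be modeled as a virtual plane fixed in the air in front of the display: a touch fires when a tracked hand point crosses it. The sketch below is a deliberate simplification; the plane distance, surface size, and axis-aligned test are made-up illustration values, not the paper's parameters:

```python
# Sketch: a virtual touch plane hanging in the air at z = plane_z metres
# from the tracking origin, width x height metres in extent.
def plane_touch(hand, plane_z=1.5, width=2.0, height=1.2):
    """Return the 2D touch point if the hand crosses the aerial plane."""
    x, y, z = hand
    if z <= plane_z and abs(x) <= width / 2 and 0.0 <= y <= height:
        return (x, y)  # project the hand onto the surface
    return None
```

Because the plane is virtual, several users can "touch" it at once without reaching the physical display wall.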

Ambient Display: Picture Navigation Based on User Movement (앰비언트 디스플레이: 사용자 위치 이동 기반의 사진 내비게이션)

  • Yoon, Yeo-Jin;Ryu, Han-Sol;Park, Chan-Yong;Park, Soo-Jun;Choi, Soo-Mi
    • Journal of the HCI Society of Korea
    • /
    • v.2 no.2
    • /
    • pp.27-34
    • /
    • 2007
  • In ubiquitous computing, there is increasing demand for displays that react to a user's actions. We propose a method of navigating pictures on an ambient display using implicit interactions. The ambient display identifies the user and measures their distance using an RFID reader and ultrasonic sensors. When the user is far from the display, it acts as a digital picture frame and does not attract attention. When the user comes within an appropriate range for interaction, the display shows pictures that are related to the user and provides quasi-3D navigation using the TIP (tour into the picture) method. In addition, menus can be manipulated directly on a touch screen or remotely using an air mouse. In an emergency, LEDs around the display flash to alert the user.
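The distance-dependent behavior described above amounts to threshold-based mode switching on the ultrasonic range reading. The thresholds and mode names below are illustrative assumptions, not values from the paper:

```python
# Sketch: map a measured user distance (from the ultrasonic sensors)
# and the RFID identification result to an ambient-display state.
def display_mode(distance_m, user_known):
    if not user_known or distance_m > 3.0:
        return "picture_frame"        # far away: unobtrusive slideshow
    if distance_m > 1.0:
        return "personal_navigation"  # in range: user-related pictures, TIP view
    return "touch_interaction"        # close: direct menu manipulation
```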


A Method for Accessing Interaction Information of IPTV Contents with QR Codes and Smartphones (QR코드를 이용한 IPTV 콘텐츠의 인터랙션 정보 접근 방법)

  • Sim, Kun-Ho;Lim, Young-Hwan
    • Journal of Broadcast Engineering
    • /
    • v.17 no.2
    • /
    • pp.295-304
    • /
    • 2012
  • IPTV service is changing from TV-based IPTV, which provides digital broadcasting and one-way VOD over terrestrial broadcast, to web IPTV, which uses wired and wireless networks and supports interactive digital broadcasting. However, currently serviced IPTV makes interaction inconvenient for viewers and does not give users enough ways to access content; in addition, overlaid content covers a large part of the screen. To solve these problems, we add a QR code to the IPTV screen that can be recognized by a smartphone. In this way, we provide a method for accessing interaction information and, by developing an editor that can insert QR codes into video, an easy way for clients to create and access such content.

Design of dataglove based multimodal interface for 3D object manipulation in virtual environment (3 차원 오브젝트 직접조작을 위한 데이터 글러브 기반의 멀티모달 인터페이스 설계)

  • Lim, Mi-Jung;Park, Peom
    • Proceedings of HCI Korea
    • /
    • 2006.02a
    • /
    • pp.1011-1018
    • /
    • 2006
  • A multimodal interface is a cognition-based technology that interprets and encodes information about natural human behaviors such as gestures, gaze, hand movement, behavioral patterns, voice, and physical location. In this paper, we design and implement a 3D-object-based multimodal interface using gesture, voice, and touch. The service domain is the smart home: through direct manipulation of 3D objects, users can remotely monitor and control objects in the home. Because several modalities must be recognized and processed in parallel during multimodal input and output, the combination and encoding of the modalities and the input/output formats become issues. Based on the characteristics of the modalities and an analysis of human cognitive structure, we present an input-combination scheme for the gesture, voice, and touch modalities, and design an efficient prototype for 3D object interaction using multimodality.
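Parallel processing of several modalities can be sketched as time-window fusion of input events: events from different modalities that arrive close together are merged into one command. The window length and event format below are illustrative assumptions, not the paper's actual combination scheme:

```python
# Sketch: group (timestamp, modality, value) events into commands.
# Events within `window` seconds of the first event in a group fuse
# into a single multimodal command.
def fuse(events, window=0.5):
    events = sorted(events)
    commands, group = [], []
    for ev in events:
        if group and ev[0] - group[0][0] > window:
            commands.append({m: v for _, m, v in group})
            group = []
        group.append(ev)
    if group:
        commands.append({m: v for _, m, v in group})
    return commands

# A pointing gesture plus the spoken word "on" fuse into one command;
# a later touch event becomes a separate command.
cmds = fuse([(0.10, "gesture", "point:lamp"), (0.35, "voice", "on"),
             (2.00, "touch", "tap:tv")])
```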
