• Title/Summary/Keyword: Human-Computer Interaction (휴먼-컴퓨터 인터랙션)

Opportunities and Future Directions of Human-Metaverse Interaction (휴먼-메타버스 인터랙션의 기회와 발전방향)

  • Yoon, Hyoseok;Park, ChangJu;Park, Jung Yeon
    • Smart Media Journal, v.11 no.6, pp.9-17, 2022
  • In the COVID-19 pandemic era, demand for contactless services grew, and the use of extended reality and metaverse services increased rapidly across many applications. In this paper, we analyze the Gather.town, ifland, Roblox, and ZEPETO metaverse platforms in terms of user interaction, avatar-based interaction, and virtual world authoring. In particular, we distinguish among user input techniques performed in the real world, avatar representation techniques that represent users in the virtual world, and interaction types that create a virtual world through user participation. Based on this analysis, we highlight current trends and needs in human-metaverse interaction and forecast future opportunities and research directions.

Laser Pointer Interaction System Based on Image Processing (영상처리 기반의 레이저 포인터 인터랙션 시스템)

  • Kim, Nam-Woo;Lee, Seung-Jae;Lee, Joon-Jae;Lee, Byung-Gook
    • Journal of Korea Multimedia Society, v.11 no.3, pp.373-385, 2008
  • The evolution of computer input devices largely slowed after the introduction of the mouse as a pointing device. Although the stylus and the touch screen later provided some alternatives, all of these methods were designed for close-range interaction with the computer. Few options let a user interact with a computer from a distance, which is especially needed during presentations. In this paper, we try to fill that gap by proposing a laser pointer interaction system that lets the user send pointing commands to the computer from a distance using only a laser pointer, which is cheap and readily available. Combined with image-processing software, the system provides mouse-like pointing interaction with the computer, as sketched below. The proposed system works well not only on a conventional flat screen but also on flexible screens: a non-linear coordinate-mapping algorithm lets the system support non-planar environments such as curved and flexible walls.
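
The abstract does not give the paper's own algorithm, so the following is a minimal sketch of the two core steps it describes, assuming OpenCV: locating the laser dot as the brightest blob in a camera frame, and mapping camera coordinates to screen coordinates. A single homography covers only the planar-screen case; the paper's non-linear mapping for curved and flexible walls would replace that step. The calibration points below are invented for illustration.

```python
import cv2
import numpy as np

def find_laser_dot(frame_bgr, threshold=240):
    """Return the centroid of the brightest blob in the frame, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                  # no bright spot found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

# Calibration: four correspondences between camera space and screen space
# (example values only, obtained once by pointing at the screen corners).
camera_pts = np.float32([[102, 80], [530, 95], [540, 410], [95, 400]])
screen_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
H = cv2.getPerspectiveTransform(camera_pts, screen_pts)

def camera_to_screen(pt):
    """Map one camera-space point to screen coordinates via the homography."""
    src = np.float32([[pt]])                         # shape (1, 1, 2)
    return tuple(cv2.perspectiveTransform(src, H)[0, 0])
```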

Design and Implementation of a Stereoscopic Image Control System based on User Hand Gesture Recognition (사용자 손 제스처 인식 기반 입체 영상 제어 시스템 설계 및 구현)

  • Song, Bok Deuk;Lee, Seung-Hwan;Choi, HongKyw;Kim, Sung-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering, v.26 no.3, pp.396-402, 2022
  • User interaction is being developed in various forms, and interaction using human gestures in particular is being actively studied. Among these, hand gesture recognition based on a 3D hand model is used as a human interface in the field of realistic media. Interfaces based on hand gesture recognition help users access media more easily and conveniently. Such interaction should let users view images through fast and accurate gesture recognition, without restrictions imposed by the computing environment. This paper develops a fast and accurate hand gesture recognition algorithm using the open-source MediaPipe framework and the k-NN (k-Nearest Neighbors) machine learning algorithm; a sketch of this pipeline follows. In addition, to minimize dependence on the computing environment, a stereoscopic image control system based on hand gesture recognition was designed and implemented using a web service environment and a Docker container as a virtualized runtime.
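
The abstract names MediaPipe and k-NN but not the feature representation, so the sketch below assumes the common choice of flattening the 21 detected hand landmarks into a feature vector; the training-data file names are hypothetical.

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

hands = mp.solutions.hands.Hands(max_num_hands=1)

def landmark_vector(frame_bgr):
    """Flatten MediaPipe's 21 hand landmarks into a 42-dim feature, or None."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([c for p in lm for c in (p.x, p.y)])

# Labelled training data recorded beforehand (hypothetical file names).
X_train = np.load("gesture_features.npy")   # shape (n_samples, 42)
y_train = np.load("gesture_labels.npy")     # one gesture label per sample
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)

def classify(frame_bgr):
    """Predict the gesture label for one camera frame, or None."""
    feat = landmark_vector(frame_bgr)
    return None if feat is None else knn.predict([feat])[0]
```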

Experiencing the 3D Color Environment: Understanding User Interaction with a Virtual Reality Interface (3차원 가상 색채 환경 상에서 사용자의 감성적 인터랙션에 관한 연구)

  • Oprean, Danielle;Yoon, So-Yeon
    • Science of Emotion and Sensibility, v.13 no.4, pp.789-796, 2010
  • The purpose of this study was to test a large-screen, rear-projected virtual reality (VR) interface for color choice in environmental design. The study piloted a single three-dimensional model of a bedroom, including furniture, in different color combinations. Using a mouse with an 8' × 6' rear-projection screen, participants could rotate 360 degrees within each room. Thirty-four college students viewed and interacted with the virtual rooms projected on the large screen and then filled out a survey. The study aimed to understand the interaction between users and the VR interface through measurable dimensions of that interaction: interest and user perceptions of presence and emotion. Specifically, the study focused on spatial presence, topic involvement, and enjoyment. The findings should show design researchers how empirical evidence on environmental effects can be obtained with a VR interface and how users experience interaction with such an interface.

Object Detection Using Predefined Gesture and Tracking (약속된 제스처를 이용한 객체 인식 및 추적)

  • Bae, Dae-Hee;Yi, Joon-Hwan
    • Journal of the Korea Society of Computer and Information, v.17 no.10, pp.43-53, 2012
  • In this paper, we propose a gesture-based user interface built on detecting an object by a predefined gesture and then tracking the detected object. For detection, moving objects in a frame are found by comparing multiple previous frames, and the predefined gesture is used to pick out the target among those moving objects; any object performing the predefined gesture can then be used as a controller. We also propose an object tracking algorithm, a density-based mean-shift algorithm, that uses the color distribution of the target object. The proposed tracker follows a target crossing a similarly colored background more accurately than existing techniques; a sketch of the detect-then-track flow follows. Experimental results show that the proposed detection and tracking algorithms achieve higher detection capability with less computational complexity.
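
The paper's density-based mean-shift variant is not spelled out in the abstract; the sketch below uses OpenCV's standard color-histogram mean shift as a stand-in, with frame differencing for the motion-detection stage.

```python
import cv2
import numpy as np

def moving_boxes(prev_gray, curr_gray, thresh=25, min_area=500):
    """Bounding boxes of regions that changed between consecutive frames."""
    diff = cv2.absdiff(prev_gray, curr_gray)
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) > min_area]

def target_histogram(frame_bgr, box):
    """Hue histogram of the target region, used for back-projection."""
    x, y, w, h = box
    hsv = cv2.cvtColor(frame_bgr[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    return hist

def track_step(frame_bgr, box, hist):
    """One mean-shift iteration; returns the updated tracking window."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    _, box = cv2.meanShift(backproj, box, criteria)
    return box
```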

Eye Gaze for Human Computer Interaction (눈동자의 움직임을 이용한 휴먼 컴퓨터 인터랙션)

  • 권기문;이정준;박강령;김재희
    • Proceedings of the IEEK Conference, 2003.11b, pp.83-86, 2003
  • This paper proposes a user interface to the computer based on gaze detection in a head-mounted display (HMD) environment. The system works as follows: first, the camera in the HMD is calibrated, which determines the geometric relationship between the monitor and the captured image. Second, the center of the pupil is detected using a center-of-mass algorithm (sketched below) and the gaze position is rendered on the computer screen. If the user blinks or stares at a position for a while, a message is sent to the computer. Experimental results show that the center-of-mass approach is robust against glint effects, with detection errors of 7.1% and 4.85% in the vertical and horizontal directions, respectively. Fine adjustment of the mouse position takes an additional 0.8 s. Blinks were detected successfully 98% of the time, and clicks 94% of the time.
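
A minimal sketch of the center-of-mass step, assuming OpenCV: the pupil is the dark region of the eye image, so an inverted binary threshold followed by image moments yields a centroid that tolerates small glints. The threshold value is illustrative, not from the paper.

```python
import cv2

def pupil_center(eye_gray, threshold=50):
    """Centroid of the dark (pupil) pixels of a grayscale eye image."""
    _, mask = cv2.threshold(eye_gray, threshold, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                  # no dark region found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```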

Performance Improvement of Eye Tracking System using Reinforcement Learning (강화학습을 이용한 눈동자 추적 시스템의 성능향상)

  • Shin, Hak-Chul;Shen, Yan;Khim, Sarang;Sung, WonJun;Ahmed, Minhaz Uddin;Hong, Yo-Hoon;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.13 no.2, pp.171-179, 2013
  • Recognition and image-processing performance depends on illumination variation, and one of the most important factors is the choice of algorithm parameters: different values yield different recognition accuracy. In this paper, we propose a performance improvement for an eye tracking system whose accuracy depends on environmental conditions such as the person, the location, and the illumination. The optimal threshold parameter is selected by reinforcement learning: when system accuracy drops, reinforcement learning is used to retrain the parameter value, as sketched below. According to the experimental results, the performance of the eye tracking system improves by 3% to 14% with reinforcement learning. The improved system can be used effectively for human-computer interaction.
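
The abstract does not specify the RL formulation, so the following is one plausible minimal phrasing: treat each candidate threshold as an arm of a bandit and pick among them epsilon-greedily, using measured tracking accuracy as the reward. The candidate values and the evaluation routine are hypothetical.

```python
import random

class ThresholdBandit:
    """Epsilon-greedy selection over candidate threshold parameters."""

    def __init__(self, candidates, epsilon=0.1):
        self.candidates = candidates
        self.epsilon = epsilon
        self.value = {c: 0.0 for c in candidates}   # running mean reward
        self.count = {c: 0 for c in candidates}

    def select(self):
        if random.random() < self.epsilon:
            return random.choice(self.candidates)        # explore
        return max(self.candidates, key=self.value.get)  # exploit

    def update(self, threshold, accuracy):
        """Incremental mean update with observed accuracy as the reward."""
        self.count[threshold] += 1
        n = self.count[threshold]
        self.value[threshold] += (accuracy - self.value[threshold]) / n

bandit = ThresholdBandit(candidates=[40, 50, 60, 70, 80])
t = bandit.select()
# accuracy = evaluate_tracking(t)   # hypothetical evaluation routine
# bandit.update(t, accuracy)        # feed the measured accuracy back
```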

TMCS : Tangible Media Control System (감각형 미디어 제어 시스템)

  • 오세진;장세이;우운택
    • Journal of KIISE: Software and Applications, v.31 no.10, pp.1356-1363, 2004
  • We propose the Tangible Media Control System (TMCS), which allows users to manipulate media contents with physical objects in an intuitive way. Currently, most people access digital media contents through a GUI, which provides only limited manipulation of those contents. Instead of a mouse and keyboard, the proposed system adopts two types of tangible objects: RFID-enabled objects and tracker-embedded objects. The TMCS enables users to easily access and control digital media contents with these tangible objects, along the lines of the sketch below. In addition, it supports an interactive media controller with which users can synthesize media contents and generate new contents to their taste. It also offers personalized contents suited to user preferences by exploiting context such as the user's profile and situational information. The proposed system can therefore be applied to various interactive applications such as multimedia education, entertainment, and multimedia editing.
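
The abstract does not give the object-to-action binding, so the sketch below (tag IDs, actions, and the player class are all invented for illustration) shows the kind of dispatch such a tangible controller needs: each physical object's RFID tag maps to one media operation.

```python
class DemoPlayer:
    """Stand-in media player; a real system would wrap an actual player."""
    def play(self):        print("playing")
    def pause(self):       print("paused")
    def load(self, name):  print(f"loaded {name}")

# Binding from RFID tag IDs to media operations (illustrative values).
MEDIA_ACTIONS = {
    "tag:04A1": ("play", None),
    "tag:04A2": ("pause", None),
    "tag:04A3": ("load", "vacation_photos"),
}

def on_tag_read(tag_id, player):
    """Dispatch the media action bound to a detected RFID tag."""
    action = MEDIA_ACTIONS.get(tag_id)
    if action is None:
        return                        # unknown object: ignore it
    command, arg = action
    handler = getattr(player, command)
    if arg is None:
        handler()
    else:
        handler(arg)

on_tag_read("tag:04A3", DemoPlayer())   # -> loaded vacation_photos
```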

AIM: Design and Implementation of Agent-based Intelligent Middleware for Ubiquitous HCI Environments (AIM: 유비쿼터스 HCI 환경을 위한 에이전트 기반 지능형 미들웨어 설계 및 구현)

  • Jang, Hyun-Su;Kim, Youn-Woo;Choi, Jung-Hwan;Kang, Dong-Hyun;Song, Chang-Hwan;Eom, Young-Ik
    • The KIPS Transactions: Part A, v.16A no.1, pp.43-54, 2009
  • With the emergence of the ubiquitous computing era, middleware that takes full advantage of HCI factors to support user-centric services has become increasingly important. Many studies on HCI-friendly middleware for user-centric services have been performed, but previous work falls short in supporting the HCI factors those services need. In this paper, we present AIM, an agent-based intelligent middleware that provides user-centric services in ubiquitous HCI environments. We derive middleware requirements for user-centric services by analyzing various HCI-friendly middleware, and we design AIM to effectively support HCI factors such as context information management, inference of user behavior patterns, and dynamic agent generation (illustrated in the sketch below). We introduce service scenarios based on user modalities in smart spaces. Finally, a prototype implementation demonstrates the benefits of the proposed infrastructure.
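
As a rough illustration of the dynamic agent generation idea (class and method names here are invented, not the AIM API), a middleware can keep a registry of agent types and instantiate one on demand when a context event arrives that no live agent handles:

```python
class Agent:
    """Base interface for agents managed by the middleware."""
    def handles(self, event_type): return False
    def on_event(self, event): pass

class LightingAgent(Agent):
    """Example agent reacting to one kind of context event."""
    def handles(self, event_type): return event_type == "user_entered_room"
    def on_event(self, event): print(f"adjust lights for {event['user']}")

class Middleware:
    def __init__(self, agent_types):
        self.agent_types = agent_types   # registered classes, not instances
        self.live_agents = []

    def dispatch(self, event):
        """Route an event; spawn a suitable agent if none is live yet."""
        for agent in self.live_agents:
            if agent.handles(event["type"]):
                agent.on_event(event)
                return
        for cls in self.agent_types:     # dynamic agent generation
            agent = cls()
            if agent.handles(event["type"]):
                self.live_agents.append(agent)
                agent.on_event(event)
                return

mw = Middleware([LightingAgent])
mw.dispatch({"type": "user_entered_room", "user": "alice"})
```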

Human Gesture Recognition Technology Based on User Experience for Multimedia Contents Control (멀티미디어 콘텐츠 제어를 위한 사용자 경험 기반 동작 인식 기술)

  • Kim, Yun-Sik;Park, Sang-Yun;Ok, Soo-Yol;Lee, Suk-Hwan;Lee, Eung-Joo
    • Journal of Korea Multimedia Society, v.15 no.10, pp.1196-1204, 2012
  • In this paper, a series of algorithms is proposed for controlling different kinds of multimedia contents and realizing human-computer interaction with a single input device. Human gesture recognition based on a natural user interface (NUI) is presented first. Since the raw camera image is not well suited to further processing, it is transformed into the YCbCr color space, and morphological processing is then used to remove noise. Boundary energy and depth information are extracted for hand detection. Once the hand is detected, PCA is used to recognize the hand posture, while a difference image and the moment method are used to locate the hand centroid and extract the trajectory of hand movement. Eight direction codes are defined to quantize the gesture trajectory into symbol values, and an HMM then recognizes gestures from the resulting symbol sequence; two of these steps are sketched below. With these methods, multimedia contents can be controlled through human gesture recognition. In extensive experiments the proposed algorithms performed well: the hand detection rate reached 94.25%, the gesture recognition rate exceeded 92.6%, the hand posture recognition rate reached 85.86%, and the face detection rate reached 89.58%. These results show that many kinds of multimedia contents on a computer, such as video players, MP3 players, and e-books, can be controlled effectively.
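
A minimal sketch of two steps from this pipeline, assuming OpenCV: skin segmentation in YCbCr (the Cb/Cr bounds below are common textbook values, not the paper's) and quantization of a movement vector into one of the eight direction codes used to symbolize a trajectory for the HMM.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr):
    """Binary skin mask via YCbCr thresholding plus morphological cleanup."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # Y, Cr, Cb
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

def direction_code(prev_pt, curr_pt):
    """Quantize movement between two centroids into 8 directions (0-7)."""
    dx, dy = curr_pt[0] - prev_pt[0], curr_pt[1] - prev_pt[1]
    angle = np.arctan2(-dy, dx) % (2 * np.pi)   # image y-axis points down
    return int((angle + np.pi / 8) // (np.pi / 4)) % 8
```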