• Title/Summary/Keyword: human computer interface & interaction

156 search results

Wireless EMG-based Human-Computer Interface for Persons with Disability

  • Lee, Myoung-Joon; Moon, In-Hyuk; Kim, Sin-Ki; Mun, Mu-Seong
    • 제어로봇시스템학회:학술대회논문집 / 2003.10a / pp.1485-1488 / 2003
  • This paper proposes a wireless EMG-based human-computer interface (HCI) for persons with disabilities. Four interaction commands are defined by combining three shoulder-elevation motions: left, right, and both-shoulder elevation. The motions are recognized by comparing EMG signals from the levator scapulae muscles against double thresholds. Real-time EMG processing hardware is implemented for acquiring the EMG signals and recognizing the motions: high-pass, low-pass, band-pass, and band-rejection filters, together with a full-wave rectifier and a mean-absolute-value circuit, are embedded on a board with a high-speed microprocessor. The recognition results are transferred to a wireless client system, such as a mobile robot, via a Bluetooth module. Experimental results obtained with the implemented real-time EMG processing hardware show that the proposed wireless EMG-based HCI is feasible for the disabled.
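The double-threshold recognition scheme described in this abstract can be sketched roughly as follows. The threshold values, the normalized mean-absolute-value (MAV) inputs, and the command names are illustrative assumptions, not details taken from the paper:

```python
# Hypothetical sketch of a double-threshold shoulder-elevation
# classifier: the MAV envelope of each levator scapulae channel is
# compared against a lower and an upper threshold, and the resulting
# left/right/both pattern is mapped to one of four interface states.
LOW, HIGH = 0.2, 0.6  # assumed thresholds on the normalized MAV envelope

def shoulder_command(mav_left: float, mav_right: float) -> str:
    left = mav_left > HIGH
    right = mav_right > HIGH
    if left and right:
        return "BOTH_ELEVATION"
    if left:
        return "LEFT_ELEVATION"
    if right:
        return "RIGHT_ELEVATION"
    if mav_left < LOW and mav_right < LOW:
        return "REST"
    return "UNDECIDED"  # between thresholds: ignored to avoid chatter
```

Keeping an "undecided" band between the two thresholds is one common reason for using double rather than single thresholds: it suppresses spurious commands near the decision boundary.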


Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun; Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in Human-Robot Interface (HRI) and Human-Computer Interaction (HCI). Facial expressions let a system respond appropriately to the user's emotional state and allow service agents, such as intelligent robots, to infer which services to offer. This article addresses expressive face modeling with an advanced Active Appearance Model (AAM) for facial emotion recognition, considering the six universal emotion categories defined by Ekman. In the human face, emotions are expressed most prominently by the eyes and mouth; recognizing emotion from a facial image therefore requires extracting feature points such as Ekman's Action Units (AUs). The AAM is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs, but the traditional AAM depends strongly on the initial parameter settings of the model. This paper therefore introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian network. First, the reconstructive parameters of a new gray-scale image are obtained by sample-based learning and used to reconstruct the shape and texture of the image, from which the initial AAM parameters are calculated. The distance error between the model and the target contour is then reduced by adjusting the model parameters, and after several iterations the model matches the facial feature outline. Finally, the fitted parameters are used to recognize the facial emotion with a Bayesian network.
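The final classification step, emotion from AU activations via a probabilistic model, can be illustrated with a much simpler stand-in: a naive-Bayes classifier over binary AU activations. The paper's actual Bayesian network is richer than this, and the AU subset, emotion subset, and all probabilities below are invented for the sketch:

```python
import math

# Illustrative naive-Bayes stand-in for classifying basic emotions from
# binary Action Unit (AU) activations extracted by AAM fitting.
# All numbers are made up for demonstration purposes.
EMOTIONS = ["happiness", "sadness", "surprise"]  # subset for brevity
PRIOR = {e: 1.0 / len(EMOTIONS) for e in EMOTIONS}

# P(AU active | emotion), purely illustrative values
LIKELIHOOD = {
    "happiness": {"AU6": 0.9, "AU12": 0.95, "AU1": 0.1},
    "sadness":   {"AU6": 0.1, "AU12": 0.05, "AU1": 0.8},
    "surprise":  {"AU6": 0.2, "AU12": 0.1,  "AU1": 0.9},
}

def classify(active_aus: set) -> str:
    """Return the emotion with the highest posterior log-probability."""
    best, best_lp = None, -math.inf
    for emo in EMOTIONS:
        lp = math.log(PRIOR[emo])
        for au, p in LIKELIHOOD[emo].items():
            lp += math.log(p if au in active_aus else 1.0 - p)
        if lp > best_lp:
            best, best_lp = emo, lp
    return best
```

A full Bayesian network would additionally model dependencies between AUs (e.g. AU6 and AU12 co-occurring in smiles), which naive Bayes deliberately ignores.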

Sportive Kiosk Interface Design using Tangible Interaction (촉각적 인터랙션을 활용한 유희적 키오스크 인터페이스 디자인)

  • Lim, Byung-Woo; Jo, Dong-Hee; Cho, Yong-Jae
    • The Journal of the Korea Contents Association / v.8 no.5 / pp.155-164 / 2008
  • A kiosk is an unmanned information system placed in public or commercial spaces so that users can access information conveniently. Unlike a personal computer, it serves a wide range of users, so user characteristics must be considered in kiosk interface design. In practice, however, the kiosks in public places such as subway stations are designed without considering their users, and become almost useless as their serviceability falls. This study points out these problems and proposes a concept for interface design in public places as a more positive promotional method. To that end, we examine the concepts of tangible interaction and interspace, and the playfulness experienced in the process of interaction between human and computer, and study a sportive kiosk interface design for the interspace using the principles of tangible interaction. The conceptual model in this study refers to the ART+COM project.

Tangible Tele-Meeting in Tangible Space Initiative

  • Lee, Joong-Jae; Lee, Hyun-Jin; Jeong, Mun-Ho; Jeong, SeongWon; You, Bum-Jae
    • Journal of Electrical Engineering and Technology / v.9 no.2 / pp.762-770 / 2014
  • Tangible Space Initiative (TSI) is a new framework that can provide a more natural and intuitive human-computer interface for users. It is composed of three cooperative components: a Tangible Interface, a Responsive Cyber Space, and a Tangible Agent. In this paper we present a Tangible Tele-Meeting system in TSI that allows people to communicate with each other without any spatial limitation. In addition, we introduce a method for registering a Tangible Avatar with a Tangible Agent, based on relative pose estimation between the user and the Tangible Agent. Experimental results show that the user experiences an interaction environment that is more natural and intelligent than that provided by conventional tele-meeting systems.

A Review on the Trends of User-Centered Information Communication (이용자 중심의 정보 커뮤니케이션 동향)

  • Moon, Kyung-Hwa
    • Journal of Information Management / v.33 no.2 / pp.49-66 / 2002
  • This paper reviews recent trends in user-centered information communication, focusing on internet-based Computer-Mediated Communication (CMC), human factors engineering, and Human-Computer Interaction (HCI). The review finds that the human-centered information communication process influences the development of user interfaces.

Laser pointer detection using neural network for human computer interaction (인간-컴퓨터 상호작용을 위한 신경망 알고리즘기반 레이저포인터 검출)

  • Jung, Chan-Woong; Jeong, Sung-Moon; Lee, Min-Ho
    • Journal of Korea Society of Industrial Information Systems / v.16 no.1 / pp.21-30 / 2011
  • This paper presents an effective method for detecting a laser pointer spot on a screen using a neural network algorithm, for use in a human-computer interaction system. The neural network is trained on image patches that contain no laser pointer; it then produces an output value for each patch of the input camera image, so that even a small variation in the image can be amplified and the laser pointer spot detected. The proposed system consists of a laser pointer, a low-cost web camera, and image processing software, and it can detect the laser spot even when the background on the computer monitor has a color similar to that of the spot. The proposed technique therefore contributes to improving the performance of human-computer interaction systems.
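The amplify-the-residual idea in this abstract can be illustrated with a much simpler stand-in: a model trained only on pointer-free frames is replaced here by a mean background frame, each pixel's deviation from it is amplified and clipped, and the strongest residual above a threshold is taken as the spot. The gain and threshold values are assumptions, and frames are plain lists of rows of floats in [0, 1]:

```python
# Simplified sketch of residual-amplification spot detection; the
# "trained network" is replaced by a background frame for illustration.
GAIN, THRESH = 8.0, 0.5  # assumed amplification gain and detection threshold

def detect_spot(frame, background):
    """Return (row, col) of the strongest amplified residual, or None."""
    best, best_val = None, 0.0
    for y, (frow, brow) in enumerate(zip(frame, background)):
        for x, (f, b) in enumerate(zip(frow, brow)):
            residual = min(max(GAIN * (f - b), 0.0), 1.0)  # amplify, clip
            if residual > best_val:
                best, best_val = (y, x), residual
    return best if best_val > THRESH else None
```

The amplification step is what makes a dim spot on a similarly colored background detectable: a per-pixel difference of 0.2 that would be lost under a fixed brightness threshold is boosted well past it.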

Development of Wearable Assistance Suite for Interaction with Ubiquitous Environment (유비쿼터스 환경과 상호작용을 위한 착용형 도움 슈트 개발)

  • Seo, Yong-Ho; Han, Tae-Woo
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.9 no.5 / pp.93-99 / 2009
  • A wearable computer that can understand the context of human life and intelligently communicate with various electronic media in a ubiquitous computing environment would be very useful as an assistant for humans. In this paper we introduce an intelligent wearable assistance suite that can interact with both humans and electronic media in such an environment. The developed system can sense the interactive electronic media that a user wants to use and communicate with it. Using these interaction capabilities, it mediates between each medium and the user, offering a friendlier interface to the wearer. We also demonstrate the system's use through its interaction with interactive electronic media in a ubiquitous computing environment.


Context-Independent Speaker Recognition in URC Environment (지능형 서비스 로봇을 위한 문맥독립 화자인식 시스템)

  • Ji, Mi-Kyong; Kim, Sung-Tak; Kim, Hoi-Rin
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.158-162 / 2006
  • This paper presents a speaker recognition system intended for use in human-robot interaction. The proposed speaker recognition system can achieve significantly high performance in the Ubiquitous Robot Companion (URC) environment. The URC concept is a scenario in which a robot is connected to a server through a broadband connection allowing functions to be performed on the server side, thereby minimizing the stand-alone function significantly and reducing the robot client cost. Instead of giving a robot (client) on-board cognitive capabilities, the sensing and processing work are outsourced to a central computer (server) connected to the high-speed Internet, with only the moving capability provided by the robot. Our aim is to enhance human-robot interaction by increasing the performance of speaker recognition with multiple microphones on the robot side in adverse distant-talking environments. Our speaker recognizer provides the URC project with a basic interface for human-robot interaction.


Developing Interactive Game Contents using 3D Human Pose Recognition (3차원 인체 포즈 인식을 이용한 상호작용 게임 콘텐츠 개발)

  • Choi, Yoon-Ji; Park, Jae-Wan; Song, Dae-Hyeon; Lee, Chil-Woo
    • The Journal of the Korea Contents Association / v.11 no.12 / pp.619-628 / 2011
  • Vision-based 3D human pose recognition is commonly used to convey human gestures in Human-Computer Interaction (HCI). Recognition based on a 2D pose model can handle only simple 2D poses in particular environments, whereas a 3D pose model, which describes the 3D skeletal structure of the human body, can recognize more complex poses because it can use joint angles and the shape information of body parts. In this paper, we describe the development of interactive game contents using a pose recognition interface based on 3D body-joint information. The system is designed so that users can control the game contents with body motion, without any additional equipment. Poses are recognized by comparing the current input pose with predefined pose templates consisting of 3D information for 14 body joints. We implemented the game contents with our pose recognition system and verified the efficiency of the proposed system. In future work, we will improve the system so that poses can be recognized robustly in various environments.
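The template-matching step this abstract describes can be sketched as a nearest-template search over joint coordinates. The 14-joint count follows the abstract; the distance metric (summed per-joint Euclidean distance) and the acceptance threshold are assumptions:

```python
import math

# Minimal sketch of template-based pose recognition: the current pose
# (a list of 3D joint coordinates) is compared against predefined pose
# templates, and the nearest template within a distance bound wins.
MAX_DIST = 0.5  # assumed acceptance threshold (sum of per-joint distances)

def pose_distance(a, b):
    """Sum of Euclidean distances between corresponding joints."""
    return sum(math.dist(pa, pb) for pa, pb in zip(a, b))

def recognize(pose, templates):
    """pose: list of (x, y, z) joint tuples (14 in the paper's setup);
    templates: {name: pose}. Returns the best matching name or None."""
    name, d = min(((n, pose_distance(pose, t)) for n, t in templates.items()),
                  key=lambda nd: nd[1])
    return name if d < MAX_DIST else None
```

In practice the joint coordinates would first be normalized for body size and position so that the same pose performed by different users maps to similar template distances.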

Comparing Elder Users' Interaction Behavior to the Younger: Focusing on Tap, Move and Flick Tasks on a Mobile Touch Screen Device

  • Lim, Ji-Hyoun; Ryu, Tae-Beum
    • Journal of the Ergonomics Society of Korea / v.31 no.2 / pp.413-419 / 2012
  • Objective: This study presents an observation and analysis of the behavioral characteristics of older users, compared with younger users, when operating a control-on-display interface. Background: A touch interface, which allows users to act directly on the display, is regarded as an enjoyable and easy mode of human-computer interaction. Because of this close stimulus-response coupling, older users, who typically experience difficulty interacting with computers, would be expected to have a better experience using touch-based computing devices. Method: Twenty-nine participants over 50 years of age and 14 participants in their 20s took part in this study. The users performed the three primary touch-interface tasks: tap, move, and flick. For the tap task, response time and the point of the touch response were collected, and the response bias was calculated for each trial. For the move task, delivery time and finger-movement distance were recorded for each trial. For the flick task, task completion time and flick distance were recorded. Results: From the collected behavioral data, temporal and spatial differences between the young and old users' behavior were analyzed. The older users showed difficulty in completing the move task, which requires eye-hand coordination.
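One measure mentioned in this abstract, the per-trial tap response bias, can be sketched as the offset between the touch point and the target center. The abstract does not define the measure precisely, so this Euclidean formulation is an assumption:

```python
import math

# Hedged sketch of a tap "response bias" computation: the signed offset
# between where the user touched and where the target was, per trial,
# plus the mean offset over a block of trials.
def response_bias(target, touch):
    """Return (dx, dy, absolute distance) for one tap trial."""
    dx, dy = touch[0] - target[0], touch[1] - target[1]
    return dx, dy, math.hypot(dx, dy)

def mean_bias(trials):
    """trials: list of (target, touch) point pairs -> mean (dx, dy)."""
    n = len(trials)
    xs = [t[0] - g[0] for g, t in trials]
    ys = [t[1] - g[1] for g, t in trials]
    return sum(xs) / n, sum(ys) / n
```

Averaging the signed offsets rather than the absolute distances is what reveals a systematic bias (e.g. older users consistently tapping below the target) as opposed to mere imprecision.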