• Title/Summary/Keyword: human computer interface & interaction


W3C based Interoperable Multimodal Communicator (W3C 기반 상호연동 가능한 멀티모달 커뮤니케이터)

  • Park, Daemin; Gwon, Daehyeok; Choi, Jinhuyck; Lee, Injae; Choi, Haechul
    • Journal of Broadcast Engineering / v.20 no.1 / pp.140-152 / 2015
  • HCI (human-computer interaction) enables interaction between people and computers through human-familiar interfaces called modalities. Recently, to provide an optimal interface for various devices and service environments, advanced HCI methods using multiple modalities have been studied intensively. However, multimodal interfaces are difficult to realize because the modalities use different data formats and are hard to coordinate efficiently. To solve this problem, a multimodal communicator is introduced that is based on EMMA (Extensible MultiModal Annotation markup language) and the MMI (Multimodal Interaction Framework) of the W3C (World Wide Web Consortium) standards. This standards-based framework, consisting of modality components, an interaction manager, and a presentation component, makes multiple modalities interoperable and offers wide extensibility to other modalities. Experimental results show the multimodal communicator operating with two modalities, eye tracking and gesture recognition, in a map-browsing scenario.
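
To make the standard concrete, the sketch below builds a minimal EMMA 1.0 document in Python, roughly what a modality component might emit toward the interaction manager for a gaze event. The `emma:*` namespace and attributes come from the W3C EMMA 1.0 specification; the payload element (`point`) and its fields are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: wrapping a gaze-tracker result in W3C EMMA 1.0 markup,
# as a modality component might before passing it to the interaction manager.
# Only the emma:* namespace and attributes come from the EMMA specification;
# the "point" payload element is an illustrative assumption.
import xml.etree.ElementTree as ET

EMMA_NS = "http://www.w3.org/2003/04/emma"
ET.register_namespace("emma", EMMA_NS)

def gaze_to_emma(x, y, confidence):
    root = ET.Element(f"{{{EMMA_NS}}}emma", {"version": "1.0"})
    interp = ET.SubElement(root, f"{{{EMMA_NS}}}interpretation", {
        "id": "gaze1",
        f"{{{EMMA_NS}}}medium": "visual",       # medium of the eye tracker
        f"{{{EMMA_NS}}}mode": "gaze",
        f"{{{EMMA_NS}}}confidence": str(confidence),
    })
    point = ET.SubElement(interp, "point")      # application payload
    point.set("x", str(x))
    point.set("y", str(y))
    return ET.tostring(root, encoding="unicode")

print(gaze_to_emma(512, 384, 0.92))
```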

Natural User Interface with Self-righting Feature using Gravity (중력에 기반한 자연스러운 사용자 인터페이스)

  • Kim, Seung-Chan; Lim, Jong-Gwan; Bianchi, Andrea; Koo, Seong-Yong; Kwon, Dong-Soo
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.384-389 / 2009
  • In general, gestures can be utilized in the human-computer interaction area. Although acceleration information is the most widely used cue for detecting a user's intention, it is hard to use when the gesture velocity is zero or varies only slightly, owing to the inherent characteristics of the accelerometer. In this paper, a natural interaction method that does not require large gesture accelerations is described. By taking advantage of gravity, the system can generate various types of signals. Moreover, problems such as initialization and drift error can be solved using the restorative uprighting force of the system.
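
For context, the sketch below shows the standard way a 3-axis accelerometer recovers orientation from gravity alone, which works even at zero gesture acceleration; it illustrates the physical principle the paper exploits, not the authors' self-righting mechanism itself.

```python
# Sketch: estimating static tilt from a 3-axis accelerometer using gravity
# alone. Because gravity is always present, this works even when the gesture
# has zero acceleration -- exactly the case the paper identifies as hard for
# motion-based detection. Standard tilt formulas, not the paper's mechanism.
import math

def tilt_from_gravity(ax, ay, az):
    """Return (roll, pitch) in degrees from accelerometer readings in g."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return math.degrees(roll), math.degrees(pitch)

# Device lying flat: gravity entirely on the z axis.
print(tilt_from_gravity(0.0, 0.0, 1.0))      # (0.0, 0.0)
# Device tilted 45 degrees about the x axis.
print(tilt_from_gravity(0.0, 0.707, 0.707))  # approx (45.0, 0.0)
```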


A Study on Intelligent Emotional Recommendation System Using Biological Information (생체정보를 이용한 지능형 감성 추천시스템에 관한 연구)

  • Kim, Tae-Yeun
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.3 / pp.215-222 / 2021
  • As human-computer interaction (HCI) technology grows in importance and HCI research progresses, attention has shifted from computer responses driven by standard user input toward inferring the user's emotion and intention. Stress is an unavoidable result of modern civilization; it is a complex phenomenon, and a person's capacity for activity can change seriously depending on whether it is controlled. In this paper, as part of human-computer interaction, we propose an intelligent emotional recommendation system that uses music to relieve stress after measuring heart rate variability (HRV) and the acceleration photoplethysmogram (APG), both of which change under stress. A differential evolution algorithm was used to extract reliable data when acquiring and recognizing the user's biometric information, i.e., the stress index, and emotion was inferred step by step through the Semantic Web based on the obtained index. In addition, by retrieving and recommending music lists matched to the stress index and changes in emotion, an emotional recommendation system suited to the user's biometric information was implemented as an application.
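
As background for the stress-measurement step, the following sketch computes two standard time-domain HRV features from RR intervals. The paper's actual stress index is not specified in the abstract, so SDNN and RMSSD here are generic stand-ins, not the authors' formula.

```python
# Sketch: standard time-domain HRV features from RR intervals (milliseconds
# between successive heartbeats). SDNN captures overall variability, RMSSD
# short-term beat-to-beat variability; lower values are commonly associated
# with higher stress. Generic features, not the paper's specific index.
import statistics

def hrv_features(rr_ms):
    sdnn = statistics.stdev(rr_ms)
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return {"SDNN": sdnn, "RMSSD": rmssd}

print(hrv_features([812, 790, 835, 801, 778, 820, 795]))
```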

A Long-Range Touch Interface for Interaction with Smart TVs

  • Lee, Jaeyeon; Kim, DoHyung; Kim, Jaehong; Cho, Jae-Il; Sohn, Joochan
    • ETRI Journal / v.34 no.6 / pp.932-941 / 2012
  • A powerful interaction mechanism is one of the key elements for the success of smart TVs, which demand far more complex interactions than traditional TVs. This paper proposes a novel interface based on the familiar touch interaction model but using long-range bare-hand tracking to emulate touch actions. To satisfy the essential requirements of high accuracy and immediate response, the proposed hand-tracking algorithm adopts a fast color-based tracker, with modifications to avoid the problems inherent in such algorithms. By using online modeling and motion information, sensitivity to the environment can be greatly decreased. Furthermore, several ideas for solving the problems often encountered by users interacting with smart TVs are proposed, resulting in a very robust hand-tracking algorithm that works well even for users in sleeveless clothing. In addition, the proposed algorithm runs at a very high speed of 82.73 Hz. The proposed interface is confirmed to comfortably support most touch operations, such as clicks, swipes, and drags, at a distance of three meters, which makes it a good candidate for interaction with smart TVs.
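
For reference, a generic color-histogram (CamShift) tracker of the family the paper builds on can be sketched with OpenCV as below. The authors' key additions, online skin-color modeling and motion cues, are not reproduced here, and the initial hand window is an assumption.

```python
# Sketch: a generic color-based (CamShift) hand tracker of the kind the paper
# starts from. The paper's modifications for environmental robustness are not
# reproduced; the initial window is assumed to contain the hand.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
x, y, w, h = 300, 200, 60, 60                # assumed initial hand window
hsv_roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [16], [0, 180])   # hue histogram
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    box, window = cv2.CamShift(backproj, window, term)      # track the hand
    pts = cv2.boxPoints(box).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("hand", frame)
    if cv2.waitKey(1) == 27:                 # Esc to quit
        break
```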

Study on Gesture and Voice-based Interaction in Perspective of a Presentation Support Tool

  • Ha, Sang-Ho; Park, So-Young; Hong, Hye-Soo; Kim, Nam-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.593-599 / 2012
  • Objective: This study aims to implement a non-contact, gesture-based interface for presentations and to analyze its effect as an information-transfer aid. Background: Recently, research on control devices using gesture or speech recognition has been conducted alongside rapid technological growth in the UI/UX area and the appearance of smart service products that require new human-machine interfaces. However, relatively few quantitative studies on the practical effects of the new interface type have been done, while work on system implementation is very active. Method: The system presented in this study is implemented with the KINECT® sensor offered by Microsoft Corporation. To investigate whether the proposed system is effective as a presentation support tool, we conducted experiments in which several lectures were given to 40 participants in both a traditional lecture room (keyboard-based presentation control) and a non-contact, gesture-based lecture room (KINECT-based presentation control); we evaluated their interest and immersion with respect to the lecture content and lecturing method and analyzed their understanding of the content. Result: Using ANOVA, we checked whether the gesture-based presentation system can play an effective role as a presentation support tool depending on the difficulty of the content. Conclusion: A non-contact, gesture-based interface is a meaningful supportive device when delivering easy and simple information; however, its effect can vary with the content and the difficulty of the information provided. Application: The results presented in this paper may help in designing new human-machine (computer) interfaces for communication support tools.
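
To illustrate the control side of such a system, the sketch below turns a stream of tracked hand positions into slide-change commands. The `hand_positions` stream is a hypothetical stand-in for a KINECT skeleton feed, and all thresholds are illustrative.

```python
# Sketch: mapping tracked hand positions to "next/previous slide" commands,
# the core of a gesture-based presentation controller like the one described.
# The (timestamp, x) stream stands in for a Kinect skeleton feed; the
# distance/time thresholds are illustrative, not the authors' values.
SWIPE_DIST = 0.35     # metres the hand must travel horizontally...
SWIPE_TIME = 0.5      # ...within this many seconds
COOLDOWN = 1.0        # ignore new gestures right after one fires

def detect_swipes(hand_positions):
    history = []                       # recent (timestamp, x) samples
    last_fire = 0.0
    for t, x in hand_positions:        # e.g. yielded at ~30 Hz by the tracker
        history.append((t, x))
        history = [(ts, xs) for ts, xs in history if t - ts <= SWIPE_TIME]
        dx = history[-1][1] - history[0][1]
        if t - last_fire > COOLDOWN and abs(dx) >= SWIPE_DIST:
            yield "next_slide" if dx < 0 else "prev_slide"
            last_fire = t
            history = []

# A right-to-left sweep of the hand over half a second -> "next_slide".
samples = [(0.0, 0.50), (0.2, 0.35), (0.4, 0.10)]
print(list(detect_swipes(samples)))
```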

A Biosignal-Based Human Interface Controlling a Power-Wheelchair for People with Motor Disabilities

  • Kim, Ki-Hong; Kim, Hong-Kee; Kim, Jong-Sung; Son, Wook-Ho; Lee, Soo-Young
    • ETRI Journal / v.28 no.1 / pp.111-114 / 2006
  • An alternative human interface enabling people with severe motor disabilities to control an assistive system is presented. Because this interface relies on biosignals originating from the contraction of facial muscles during particular movements, even individuals with paralyzed limbs can use it with ease. For real-world application, a dedicated hardware module employing a general-purpose digital signal processor was implemented, and its validity was tested on an electrically powered wheelchair. Furthermore, to reduce error rates to a minimum for stable operation, the entropy information inherent in the signals was exploited during the classification phase. In the experiments, most of the five participating subjects could control the target system at will; thus, the proposed interface can be considered a potential alternative for interaction between the severely disabled and electronic systems.
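
One common way to use the entropy cue mentioned above is to reject uncertain classifications outright, since for a wheelchair issuing no command is safer than issuing a wrong one. The sketch below shows that generic rule; the authors' exact decision procedure is not specified in the abstract.

```python
# Sketch: entropy-based rejection of uncertain classifier outputs. Predictions
# whose posterior distribution is too flat (high entropy) are discarded, so the
# wheelchair receives no command rather than a possibly wrong one. A generic
# use of the entropy cue, not necessarily the paper's exact rule.
import math

def classify_with_rejection(posteriors, labels, max_entropy=0.8):
    """posteriors: per-class probabilities summing to 1."""
    entropy = -sum(p * math.log(p) for p in posteriors if p > 0)
    if entropy > max_entropy:
        return None                     # too uncertain: issue no command
    return labels[max(range(len(posteriors)), key=posteriors.__getitem__)]

labels = ["forward", "left", "right", "stop"]
print(classify_with_rejection([0.90, 0.05, 0.03, 0.02], labels))  # "forward"
print(classify_with_rejection([0.30, 0.28, 0.22, 0.20], labels))  # None
```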


A Development of Gesture Interfaces using Spatial Context Information

  • Kwon, Doo-Young; Bae, Ki-Tae
    • International Journal of Contents / v.7 no.1 / pp.29-36 / 2011
  • Gestures have been employed in human-computer interaction to build more natural interfaces in new computational environments. In this paper, we describe our approach to developing a gesture interface that uses spatial context information. The proposed interface recognizes a system action (e.g., a command) by integrating gesture information with spatial context information within a probabilistic framework. Two ontologies of spatial context are introduced based on the spatial information of gestures: gesture volume and gesture target. Prototype applications are developed for a smart-environment scenario in which a user interacts through gestures with digital information embedded in physical objects.
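
A minimal version of such probabilistic integration can be written as a naive-Bayes fusion of the gesture with the two spatial contexts the paper names; the probability tables in the sketch below are made-up numbers purely for illustration.

```python
# Sketch: fusing a gesture with the two spatial contexts (gesture volume,
# gesture target) under a naive-Bayes assumption:
#   P(action | g, v, t)  proportional to  P(g|a) P(v|a) P(t|a) P(a)
# All conditional tables are invented for illustration only.
def most_likely_action(gesture, volume, target, priors, likelihoods):
    scores = {}
    for action, prior in priors.items():
        p_g, p_v, p_t = likelihoods[action]
        scores[action] = (prior
                          * p_g.get(gesture, 1e-6)
                          * p_v.get(volume, 1e-6)
                          * p_t.get(target, 1e-6))
    return max(scores, key=scores.get)

priors = {"turn_on": 0.5, "turn_off": 0.5}
likelihoods = {
    "turn_on":  ({"point": 0.7, "wave": 0.3},
                 {"large": 0.2, "small": 0.8},
                 {"lamp": 0.9, "tv": 0.1}),
    "turn_off": ({"point": 0.4, "wave": 0.6},
                 {"large": 0.6, "small": 0.4},
                 {"lamp": 0.5, "tv": 0.5}),
}
print(most_likely_action("point", "small", "lamp", priors, likelihoods))  # turn_on
```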

Haptic Modeler using Haptic User Interface (촉감 사용자 인터페이스를 이용한 촉감 모델러)

  • Cha, Jong-Eun; Oakley, Ian; Kim, Yeong-Mi; Kim, Jong-Phil; Lee, Beom-Chan; Seo, Yong-Won; Ryu, Je-Ha
    • Proceedings of the HCI Society of Korea Conference / 2006.02a / pp.1031-1036 / 2006
  • The haptics field, which lets users touch displayed content by providing tactile feedback, is being widely studied in medicine, education, the military, and broadcasting. In medicine, products are already commercialized, such as Reachin's laparoscopic surgery training software, which lets trainees practice surgical procedures while feeling the same forces as in real surgery. However, although haptics offers more realistic and natural interaction by adding tactile sensation to audiovisual information, it is still unfamiliar to ordinary users. One reason is the lack of content that supports haptic interaction. Haptic content generally consists of computer graphics models, so the content is created with an ordinary graphics modeler, but the haptic information must then be inserted into the file by hand or programmed separately for each application. Because graphic modeling and haptic modeling do not proceed simultaneously, creating haptic content takes a long time, and adding haptic information is not intuitive. In graphic modeling, content can be manipulated by hand while watching the screen, but in haptic modeling the user must feel the tactile sensation and manipulate the content at the same time, which calls for a dedicated interface. This paper describes a haptic modeler that allows haptic content to be created and manipulated intuitively. In the modeler, the user can create and manipulate three-dimensional content while touching it in real time with a 3-DOF haptic device, and can intuitively edit the surface haptic properties of the content through a haptic user interface. Unlike a conventional two-dimensional graphical user interface operated with a mouse, the haptic user interface is three-dimensional and consists of buttons, radio buttons, sliders, and a joystick operated with the haptic device. The user changes a surface property value by manipulating each component, and can set the value intuitively by touching a part of the interface and feeling the resulting sensation in real time. In addition, an XML-based file format is provided, so created content can be saved, loaded, or added to other content.
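
Since the abstract says only that the file format is XML-based, the sketch below shows one plausible shape such a format could take; every element and attribute name is hypothetical.

```python
# Sketch: a possible XML save format for per-surface haptic properties. The
# paper states only that its format is XML-based; all element and attribute
# names here are hypothetical.
import xml.etree.ElementTree as ET

def save_haptic_properties(path, objects):
    root = ET.Element("hapticScene")
    for name, props in objects.items():
        obj = ET.SubElement(root, "object", {"mesh": name})
        ET.SubElement(obj, "surface", {
            "stiffness": str(props["stiffness"]),          # N/m
            "damping": str(props["damping"]),
            "staticFriction": str(props["static_friction"]),
            "dynamicFriction": str(props["dynamic_friction"]),
        })
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

save_haptic_properties("scene.xml", {
    "cup.obj": {"stiffness": 800.0, "damping": 0.6,
                "static_friction": 0.5, "dynamic_friction": 0.3},
})
```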


Development of TTS for a Human-Robot Interface (휴먼-로봇 인터페이스를 위한 TTS의 개발)

  • Bae Jae-Hyun; Oh Yung-Hwan
    • Proceedings of the KSPS conference / 2006.05a / pp.135-138 / 2006
  • The communication method between human and robot is an important part of human-robot interaction, and speech is an easy and intuitive communication method for humans. By using speech as the communication method, people can interact with a robot in a familiar way. In this paper, we developed a TTS (text-to-speech) system for human-robot interaction. The synthesis algorithms were modified to make efficient use of the robot's restricted resources, and the synthesis database was restructured for efficiency. As a result, we could reduce the computation time with only slight degradation of speech quality.
