• Title/Summary/Keyword: Multi-Modal Interface

39 search results

Teleoperation of a Field Mobile Manipulator with a Wearable Haptic-based Multi-Modal User Interface and Its Application to Explosive Ordnance Disposal

  • Ryu Dongseok;Hwang Chang-Soon;Kang Sungchul;Kim Munsang;Song Jae-Bok
    • Journal of Mechanical Science and Technology / v.19 no.10 / pp.1864-1874 / 2005
  • This paper describes the design and implementation of a wearable multi-modal user interface for a teleoperated field robot system. Teleoperated field robots have recently been employed in hazardous-environment applications (e.g. rescue, explosive ordnance disposal, security). To complete these missions in outdoor environments, the robot system must have the appropriate functions, accuracy, and reliability. However, the more functions it has, the harder it becomes to operate. To cope with this problem, an effective user interface should be developed; furthermore, the interface needs to be wearable for portability and prompt action. This research starts from the question of how to teleoperate a complicated slave robot easily. The main challenge is to build a simple, intuitive user interface in a wearable shape and size. The proposed interface provides multiple modalities (visual, auditory, and haptic senses) and enables an operator to control every function of a field robot more intuitively. Finally, an EOD (explosive ordnance disposal) demonstration is conducted to verify the validity of the proposed wearable multi-modal user interface.

A Study on Developmental Direction of Interface Design for Gesture Recognition Technology

  • Lee, Dong-Min;Lee, Jeong-Ju
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.499-505 / 2012
  • Objective: To study how interaction between mobile devices and users is being transformed, through an analysis of current trends in gesture interface technology. Background: For smooth interaction between machines and users, interface technology has evolved from the command line to the mouse, and now touch and gesture recognition are being researched and used. In the future, the technology is expected to evolve into multi-modal interfaces fusing the visual and auditory senses, and into 3D multi-modal interfaces using three-dimensional virtual worlds and brain waves. Method: Within the development of computer interfaces, which follows the evolution of mobile devices, actively researched gesture interfaces and related technology trends are studied comprehensively. Based on an investigation of gesture-based information gathering techniques, they are separated into four categories: sensor, touch, visual, and multi-modal gesture interfaces. Each category is examined through technology trends and existing real-world examples. Through these methods, the transformation of interaction between mobile devices and humans is studied. Conclusion: Gesture-based interface technology brings intelligent communication to the interaction between formerly static machines and users. It is therefore an important element technology that will make the interaction between humans and machines more dynamic. Application: The results of this study may help in the design of the gesture interfaces currently in use.

Brain Computer Interfacing: A Multi-Modal Perspective

  • Fazli, Siamac;Lee, Seong-Whan
    • Journal of Computing Science and Engineering / v.7 no.2 / pp.132-138 / 2013
  • Multi-modal techniques have received increasing interest in the neuroscientific and brain computer interface (BCI) communities in recent times. Two aspects of multi-modal imaging for BCI will be reviewed. First, the use of recordings of multiple subjects to help find subject-independent BCI classifiers is considered. Then, multi-modal neuroimaging methods involving combined electroencephalogram and near-infrared spectroscopy measurements are discussed, which can help achieve enhanced and robust BCI performance.

A Multi Modal Interface for Mobile Environment (모바일 환경에서의 Multi Modal 인터페이스)

  • Seo, Yong-Won;Lee, Beom-Chan;Lee, Jun-Hun;Kim, Jong-Phil;Ryu, Je-Ha
    • Proceedings of the Korean HCI Society Conference (한국HCI학회 학술대회논문집) / 2006.02a / pp.666-671 / 2006
  • A "multi-modal interface" is a method of interfacing between humans and machines using speech, a keyboard, or a pen. Recently, as portable devices have spread widely, become smaller and more intelligent, and acquired more diverse applications, expectations have grown for input methods that users can operate more conveniently and easily. Currently, the only input devices available on a portable device are its buttons or a touch pad (in the case of a PDA). However, buttons and touch pads are difficult for people with disabilities to use, are awkward for playing games on portable devices, and pose a significant obstacle to developing new games and applications. To overcome these problems, this paper proposes a new multi-modal interface for portable devices. Using a PDA (Personal Digital Assistant), a multi-modal interface was developed that provides greater fun and realism. By using sensors to enable control of the handheld device with the wrist, a convenient and novel input method is provided to the user. If speech recognition is added in the future, machines could communicate through speech and gestures, as humans do with one another, rather than through the traditional keyboard or buttons. Furthermore, by adding tactile feedback with a vibrator, information can also be delivered to visually impaired and elderly users, who have so far been left out of multi-modal interfaces; in fact, people respond much faster to touch than to sight or hearing. If this system is applied to games, users can participate actively and enjoy a more realistic experience. In special situations it can deliver information discreetly, and it can be used in mobile application services to be developed in the future.
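A minimal sketch of the wrist-sensor input mapping the abstract describes: tilt readings from the handheld's sensor are thresholded into discrete directional events that replace button presses. The axis names, units, and threshold are illustrative assumptions, not details from the paper.

```python
def tilt_to_input(tilt_x: float, tilt_y: float, threshold: float = 15.0) -> str:
    """Map wrist tilt (assumed degrees) to a discrete direction for games/menus."""
    # Pick the dominant axis first, then threshold it.
    if abs(tilt_x) >= abs(tilt_y):
        if tilt_x > threshold:
            return "right"
        if tilt_x < -threshold:
            return "left"
    else:
        if tilt_y > threshold:
            return "up"
        if tilt_y < -threshold:
            return "down"
    return "neutral"  # small tilts are ignored to suppress hand jitter
```

Thresholding with a dead zone like this is a common way to turn a continuous sensor signal into the kind of button-free input events the paper's interface provides.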


A Study on the Multi-Modal Browsing System by Integration of Browsers Using Java RMI (자바 RMI를 이용한 브라우저 통합에 의한 멀티-모달 브라우징 시스템에 관한 연구)

  • Jang Joonsik;Yoon Jaeseog;Kim Gukboh
    • Journal of Internet Computing and Services / v.6 no.1 / pp.95-103 / 2005
  • Recently, multi-modal systems have been studied widely and actively. Such systems increase the possibility of realizing HCI (Human-Computer Interaction), can provide information in various ways, and are applicable to e-business applications. If an ideal multi-modal system can be realized in the future, users will be able to maximize interactive usability between information devices and people in hands-free and eyes-free conditions. In this paper, a new multi-modal browsing system is suggested that integrates an HTML browser and a voice browser using Java RMI as the communication interface, and an English-English dictionary search application system is implemented as an example.
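The integration pattern the abstract describes can be sketched as a coordinator that fans one dictionary lookup out to both browsers. In the paper the two browsers are separate processes linked by Java RMI; in this sketch plain method calls stand in for the remote invocations, and all class and method names are illustrative assumptions.

```python
class HtmlBrowser:
    def render(self, entry: str) -> str:
        # Stand-in for displaying a dictionary entry visually.
        return f"<p>{entry}</p>"

class VoiceBrowser:
    def speak(self, entry: str) -> str:
        # Stand-in for synthesizing the entry as speech.
        return f"speaking: {entry}"

class MultiModalCoordinator:
    """Routes one lookup result to both modalities, as the RMI hub would."""
    def __init__(self, html: HtmlBrowser, voice: VoiceBrowser):
        self.html = html
        self.voice = voice

    def lookup(self, word: str, dictionary: dict) -> dict:
        entry = dictionary.get(word, "no entry")
        # One query produces two synchronized presentations.
        return {"visual": self.html.render(entry),
                "audio": self.voice.speak(entry)}

coordinator = MultiModalCoordinator(HtmlBrowser(), VoiceBrowser())
result = coordinator.lookup("apple", {"apple": "a round fruit"})
```

The key design point is that neither browser talks to the other directly; a central coordinator keeps the visual and audio outputs consistent, which is what the RMI layer provides across process boundaries.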


The Effects of Multi-Modality on the Use of Smart Phones

  • Lee, Gaeun;Kim, Seongmin;Choe, Jaeho;Jung, Eui Seung
    • Journal of the Ergonomics Society of Korea / v.33 no.3 / pp.241-253 / 2014
  • Objective: The objective of this study was to examine multi-modal interaction effects of input-mode switching on the use of smart phones. Background: Multi-modality is considered an efficient alternative for input and output of information in mobile environments. However, current mobile UI (User Interface) systems have various limitations, overlooking the transitions between different modes and the usability of combinations of modes. Method: A pre-survey determined five representative smart phone tasks by their functions. The first experiment involved the use of a single mode for five single tasks; the second experiment involved the use of multiple modes for three dual tasks. The dependent variables were user preference and task completion time. The independent variable in the first experiment was the type of mode (i.e., touch, pen, or voice), while the variable in the second experiment was the type of task (i.e., internet searching, subway map, memo, gallery, and application store). Results: In the first experiment, there was no difference between the uses of pen and touch devices; however, a specific mode type was preferred depending on the functional characteristics of the task. In the second experiment, the analysis showed that user preference depended on the order and combination of modes. Even with mode transitions, users preferred multi-mode combinations that included voice. Conclusion: The order and combination of modes may affect the usability of multi-modal input. Therefore, when designing a multi-modal system, the frequent transitions between mobile contents in different modes should be properly considered. Application: The results may be used as a user-centered design guideline for mobile multi-modal UI systems.

Study about Windows System Control Using Gesture and Speech Recognition (제스처 및 음성 인식을 이용한 윈도우 시스템 제어에 관한 연구)

  • 김주홍;진성일;이남호;이용범
    • Proceedings of the IEEK Conference / 1998.10a / pp.1289-1292 / 1998
  • HCI (human-computer interface) technologies have often been implemented using a mouse, keyboard, and joystick. Because the mouse and keyboard can be used only in limited situations, more natural HCI methods such as speech-based and gesture-based methods have recently attracted wide attention. In this paper, we present a multi-modal input system to control the Windows system for practical use of a multimedia computer. Our multi-modal input system consists of three parts. The first is a virtual-hand mouse, which replaces mouse control with a set of gestures. The second is Windows control using speech recognition. The third is Windows control using gesture recognition. We introduce neural network and HMM methods to recognize speech and gestures. The results of the three parts interface directly with the CPU and through Windows.
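Once the HMM and neural-network recognizers emit a discrete label, a system like this still needs a dispatch step that maps each label to a Windows action. The sketch below shows that step; the labels and action names are illustrative assumptions, not tokens from the paper.

```python
# Per-modality lookup tables from recognized tokens to control actions.
GESTURE_COMMANDS = {
    "point":   "move_cursor",
    "grab":    "mouse_down",
    "release": "mouse_up",
}

SPEECH_COMMANDS = {
    "open":  "open_window",
    "close": "close_window",
}

def dispatch(modality: str, label: str) -> str:
    """Return the Windows control action for a recognized input token."""
    table = GESTURE_COMMANDS if modality == "gesture" else SPEECH_COMMANDS
    return table.get(label, "ignore")  # unrecognized tokens are dropped
```

Keeping the recognizers and the command tables separate means new gestures or speech commands can be added without retraining or touching the recognition code.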


A Full Body Gumdo Game with an Intelligent Cyber Fencer using a Multi-modal (3D Vision and Speech) Interface (멀티모달 인터페이스(3차원 시각과 음성)를 이용한 지능적 가상검객과의 전신 검도게임)

  • 윤정원;김세환;류제하;우운택
    • Journal of KIISE: Computing Practices and Letters / v.9 no.4 / pp.420-430 / 2003
  • This paper presents an immersive multi-modal Gumdo simulation game that allows a user to experience whole-body interaction with an intelligent cyber fencer. The proposed system consists of three modules: (i) a non-distracting multi-modal interface with 3D vision and speech, (ii) an intelligent cyber fencer, and (iii) immersive feedback through a big screen and sound. First, the multi-modal interface with 3D vision and speech allows a user to move around and shout without being encumbered. Second, the intelligent cyber fencer interacts with the user through perception and reaction modules created from the analysis of real Gumdo games. Finally, immersive audio-visual feedback from a big screen and sound effects helps the user experience immersive interaction. The proposed system thus provides the user with an immersive Gumdo experience involving whole-body movement, and can be applied to various domains such as education, exercise, and art performance.

Multi-modal Sense based Interface for Augmented Reality in Table Top Display (테이블 탑 디스플레이 기반 증강현실 구현을 위한 다중 감각 지원 인터페이스)

  • Jeong, Jong-Mun;Yang, Hyung-Jeong;Kim, Sun-Hee
    • Journal of Korea Multimedia Society / v.12 no.5 / pp.708-716 / 2009
  • Applications implemented on a tabletop display are controlled by the hands, so they offer users an intuitive interface. Users get a sense of realism when interacting with the virtual scene on a tabletop display; however, most conventional augmented reality applications on tabletop displays satisfy only the visual sense. In this paper, we propose an interface that supports multi-modal sensing, adding the tactile sense to augmented reality by vibrating a physical control unit when it collides with virtual objects. Users can thus feel collisions in addition to seeing the visual scene. The proposed system demonstrates tactile augmented reality through an air hockey game: the physical control unit vibrates when it receives virtual collision data over wireless communication. Since the tabletop display environment is extended with a tactile-sense-based physical unit rather than the hand alone, it provides a more intuitive interface.
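The collision-to-vibration path in the air hockey demonstration can be sketched as two steps: the host detects contact between the virtual puck and the tracked control unit, then encodes a vibrate command for the wireless link. The function names, the circle-overlap test, and the JSON message format are assumptions for illustration; the paper does not specify its wire format.

```python
import json

def check_collision(puck, paddle, radius=1.0) -> bool:
    """Circle-overlap test between the virtual puck and the tracked paddle."""
    dx = puck[0] - paddle[0]
    dy = puck[1] - paddle[1]
    # Two circles of equal radius touch when centers are within 2 * radius.
    return dx * dx + dy * dy <= (2 * radius) ** 2

def make_vibration_packet(strength: int, duration_ms: int) -> bytes:
    """Encode a vibrate command to send to the wireless control unit."""
    return json.dumps({"cmd": "vibrate",
                       "strength": strength,
                       "duration_ms": duration_ms}).encode()

# One step of the host-side loop: on contact, emit the haptic packet.
if check_collision((0.5, 0.0), (1.0, 0.0)):
    packet = make_vibration_packet(strength=200, duration_ms=50)
```

Sending a short, self-describing message like this keeps the physical unit simple: it only needs to parse the command and drive its vibrator for the requested duration.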


Intelligent Emotional Interface for Personal Robot and Its Application to a Humanoid Robot, AMIET

  • Seo, Yong-Ho;Jeong, Il-Woong;Jung, Hye-Won;Yang, Hyun-S.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference (제어로봇시스템학회 학술대회논문집) / 2004.08a / pp.1764-1768 / 2004
  • In the near future, robots will be used for personal purposes. To provide useful services to humans, robots will need to understand human intentions. Consequently, the development of emotional interfaces for robots is an important extension of human-robot interaction. We designed and developed an intelligent emotional interface and applied it to our humanoid robot, AMIET. Subsequent human-robot interaction demonstrated that our intelligent emotional interface is very intuitive and friendly.
