• Title/Summary/Keyword: Human-computer Interface

A Study on Network System Design for the Support of Multi-Passengers' Multimedia Service Based on HMI (Human Machine Interface) (다인승 차량용 멀티미디어 서비스 지원을 위한 HMI기반 네트워크 시스템 설계에 관한 연구)

  • Lee, Sang-yub;Lee, Jae-kyu;Cho, Hyun-joong
    • The Journal of Korean Institute of Communications and Information Sciences / v.42 no.4 / pp.899-903 / 2017
  • This paper presents an in-vehicle network architecture and its implementation for a multimedia service that supports a human-machine interface for multiple passengers. Compared with a conventional in-vehicle network architecture, a multi-passenger vehicle must additionally account for network traffic, simultaneous data transfer to multiple users, and passengers' access to a variety of media content. We therefore propose a modified architecture relative to a general MOST network, a software module that interoperates between Ethernet and the MOST network, and an access interface through which passengers can plug their Ethernet-based devices into the MOST network platform.
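
A minimal sketch of the kind of Ethernet-to-MOST bridging module the abstract describes. The MostStack class, its send_control_message method, the UDP port, and the JSON request format are illustrative assumptions, not the paper's actual software design.

```python
# Hypothetical Ethernet-to-MOST bridge: passenger devices send media requests
# over UDP/Ethernet; the bridge repackages them for a MOST-side stack.
import socket
import json

class MostStack:
    """Placeholder for a vendor MOST network stack (hypothetical API)."""
    def send_control_message(self, function_block: str, payload: dict) -> None:
        print(f"MOST <- {function_block}: {payload}")

def run_bridge(host: str = "0.0.0.0", port: int = 50000) -> None:
    most = MostStack()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind((host, port))
        while True:
            data, addr = sock.recvfrom(4096)           # request from a passenger device
            request = json.loads(data.decode("utf-8"))  # e.g. {"seat": 3, "media": "video"}
            most.send_control_message("MediaServer", request)

if __name__ == "__main__":
    run_bridge()
```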

The Application of Project Control Techniques to Process Control: The Effect of Temporal Information on Human Monitoring Tasks

  • Parush, A.;Shtub, A.;Shavit, D.
    • Transactions on Control, Automation and Systems Engineering / v.3 no.1 / pp.10-14 / 2001
  • We studied the use of time-related information, with and without prediction, to support human operators performing monitoring and control tasks in the process industry. Based on monitoring and control techniques used for project management, we developed a display design for the process industries. A simulated power plant was used to test the hypothesis that the availability of predictions, along with information on past trends, can improve the performance of human operators handling faults. Several display designs were tested in an experiment in which human operators had to detect and handle two types of faults (local and system-wide) in the simulated electricity generation process. Analysis of the results revealed that temporal data, with and without prediction, significantly reduced response time. Our results encourage the integration of temporal information and prediction into displays used for process control to enhance the capabilities of human operators. Based on the analysis, we propose some guidelines for the designer of the human interface of a process control system.
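
A small illustrative sketch of the temporal information such a display relies on: a past-trend estimate plus a short-horizon prediction of a process variable. The linear extrapolation and the example sensor values are assumptions for illustration, not the authors' display algorithm.

```python
# Compute the recent trend of a process variable and extrapolate it a few
# steps ahead, the kind of data an operator display could show alongside
# the raw history.
import numpy as np

def trend_and_prediction(history: np.ndarray, horizon: int = 10):
    """Fit a line to the sampled history and extrapolate `horizon` steps ahead."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)       # past trend
    future_t = np.arange(len(history), len(history) + horizon)
    prediction = slope * future_t + intercept               # predicted values
    return slope, prediction

if __name__ == "__main__":
    temperature = np.array([70.0, 70.4, 70.9, 71.5, 72.2, 73.0])  # hypothetical sensor log
    slope, forecast = trend_and_prediction(temperature, horizon=5)
    print(f"trend: {slope:+.2f} units/step, forecast: {np.round(forecast, 1)}")
```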

User modeling agent using natural language interface for information retrieval in WWW (자연언어 대화 Interface를 이용한 정보검색 (WWW)에 있어서 사용자 모델 에이젼트)

  • Kim, Do-Wan;Park, Jae-Deuk;Park, Dong-In
    • Annual Conference on Human and Language Technology / 1996.10a / pp.75-84 / 1996
  • Natural language is the most natural means of human communication. This paper describes the model-building techniques and the role of a user modeling agent (user modeling system) in natural-language dialogue-based information retrieval on the Internet (WWW). A user model is the counterpart of a human mental model: whereas a mental model is the model a user holds of the system, of his or her own problem situation, or of the surrounding environment, a user model is the model of the user that the system builds by representing the user's knowledge and problem situation. A user model is therefore essential for the system to support intelligent Human Computer Interaction (HCI). As a concrete example of user-model-building techniques and of a system supporting an intelligent dialogue model, this paper describes the user modeling system $BGP-MS^2$ and the knowledge base structure built for constructing user models.

Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference / 2011.06b / pp.231-234 / 2011
  • Audiovisual human emotion recognition can be considered a good approach to multimodal human-computer interaction. However, optimal multimodal information fusion remains a challenge. To overcome these limitations and bring robustness to the interface, we propose a framework for automatic human emotion recognition from speech and face display. In this paper, we develop a new approach to model-level information fusion based on the relationship between speech and facial expression, which detects temporal segments automatically and performs multimodal information fusion.
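
A hedged sketch of one way fusion of the two modalities could look: combining per-modality emotion posteriors into a fused decision. The emotion label set, the fusion weights, and the score vectors are invented for illustration and are not the paper's model-level fusion scheme.

```python
# Weighted combination of speech and face emotion posteriors into one decision.
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "neutral"]  # assumed label set

def fuse_scores(speech_post: np.ndarray, face_post: np.ndarray, w_speech: float = 0.4):
    """Combine per-modality posteriors and return the fused label and scores."""
    fused = w_speech * speech_post + (1.0 - w_speech) * face_post
    fused /= fused.sum()                       # renormalize to a distribution
    return EMOTIONS[int(np.argmax(fused))], fused

if __name__ == "__main__":
    speech = np.array([0.10, 0.60, 0.20, 0.10])  # hypothetical speech classifier output
    face = np.array([0.05, 0.80, 0.05, 0.10])    # hypothetical face classifier output
    label, scores = fuse_scores(speech, face)
    print(label, np.round(scores, 2))
```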

Development of an EMG-Based Car Interface Using Artificial Neural Networks for the Physically Handicapped (신경망을 적용한 지체장애인을 위한 근전도 기반의 자동차 인터페이스 개발)

  • Kwak, Jae-Kyung;Jeon, Tae-Woong;Park, Hum-Yong;Kim, Sung-Jin;An, Kwang-Dek
    • Journal of Information Technology Services / v.7 no.2 / pp.149-164 / 2008
  • As the computing landscape shifts to ubiquitous computing environments, demand is growing for device controls that react to users' implicit activities without excessively drawing their attention. We developed an EMG-based car interface that enables the physically handicapped to drive a car using their functioning peripheral nerves. Our method extracts electromyogram signals caused by wrist movements from four sites on the user's forearm and infers the user's intent from these signals using multi-layer neural networks. This makes it possible for the user to control car equipment and thus to drive the car. It also allows the user to enter inputs into the embedded computer through a user interface such as an instrument LCD panel. We validated the effectiveness of our method through experimental use in a car built with the EMG-based interface.
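
An illustrative sketch of the EMG-to-command pipeline the abstract outlines: features from four forearm channels classified by a multi-layer neural network into wrist-movement commands. The synthetic features, the command names, and the network size are assumptions, not the authors' trained system.

```python
# Classify 4-channel EMG feature vectors into vehicle commands with an MLP.
import numpy as np
from sklearn.neural_network import MLPClassifier

COMMANDS = ["steer_left", "steer_right", "accelerate", "brake"]  # assumed mapping

rng = np.random.default_rng(0)
# Synthetic training data: one dominant channel per wrist movement plus noise.
labels = rng.integers(0, 4, 400)
X_train = np.eye(4)[labels] + rng.normal(scale=0.2, size=(400, 4))

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, labels)

# A new feature vector dominated by channel 2 should map to "accelerate".
sample = (np.eye(4)[2] + rng.normal(scale=0.2, size=4)).reshape(1, -1)
print(COMMANDS[int(clf.predict(sample)[0])])
```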

A Study on Real-time Graphic User Interface for Hidden Target Segmentation (은닉표적의 분할을 위한 실시간 Graphic User Interface 구현에 관한 연구)

  • Yeom, Seokwon
    • Journal of the Institute of Convergence Signal Processing / v.17 no.2 / pp.67-70 / 2016
  • This paper discusses a graphic user interface (GUI) for concealed target segmentation. A human subject hiding a metal gun is captured by a passive millimeter wave (MMW) imaging system operating at a wavelength of 8 mm. The MMW image is analyzed by multi-level segmentation to segment and identify a concealed weapon under clothing. The histogram of the passive MMW image is modeled as a Gaussian mixture distribution, and LBG vector quantization (VQ) and expectation-maximization (EM) algorithms are applied sequentially to segment the body and object areas. In the experiment, the GUI is implemented with the MFC (Microsoft Foundation Classes) and OpenCV libraries and tested in real time, demonstrating the efficiency of the system.
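
A rough sketch of the segmentation idea, assuming a Gaussian mixture fitted to the image histogram by EM. scikit-learn's GaussianMixture stands in for the paper's LBG VQ plus EM pipeline, and a synthetic image replaces the passive MMW data.

```python
# Model pixel intensities as a 3-component Gaussian mixture (background, body,
# concealed object) and label every pixel by its most likely component.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic 64x64 "image" with three intensity populations.
image = np.concatenate([
    rng.normal(50, 8, 2500),    # background
    rng.normal(120, 10, 1400),  # body
    rng.normal(200, 6, 196),    # concealed object
]).reshape(64, 64)

gmm = GaussianMixture(n_components=3, random_state=0).fit(image.reshape(-1, 1))
labels = gmm.predict(image.reshape(-1, 1)).reshape(image.shape)

# Re-index components by mean intensity so segment 2 is the brightest region.
order = np.argsort(gmm.means_.ravel())
segmented = np.argsort(order)[labels]
print("pixels per segment:", np.bincount(segmented.ravel()))
```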

Head tracking system using image processing (영상처리를 이용한 머리의 움직임 추적 시스템)

  • 박경수;임창주;반영환;장필식
    • Journal of the Ergonomics Society of Korea / v.16 no.3 / pp.1-10 / 1997
  • This paper is concerned with the development and evaluation of a camera calibration method for a real-time head tracking system. Tracking of head movements is important in the design of eye-controlled human/computer interfaces and in virtual environments. We propose a video-based head tracking system. A camera mounted on the subject's head captures the front view containing eight 3-dimensional reference points (passive retro-reflecting markers) fixed at known positions on the computer monitor. The reference points are captured by an image processing board and used to calculate the 3-dimensional position and orientation of the camera. A suitable camera calibration method for providing accurate extrinsic camera parameters is proposed. The method has three steps. In the first step, the image center is calibrated using the method of varying focal length. In the second step, the focal length and the scale factor are calibrated from the Direct Linear Transformation (DLT) matrix obtained from the known position and orientation of the camera. In the third step, the position and orientation of the camera are calculated from the DLT matrix using the calibrated intrinsic camera parameters. Experimental results showed that the average error of the 3-dimensional camera position is about 0.53 cm, the angular errors of the camera orientation are less than 0.55°, and the data acquisition rate is about 10 Hz. The results of this study can be applied to the tracking of head movements for eye-controlled human/computer interfaces and virtual environments.
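
A compact sketch of the DLT step mentioned in the abstract: estimating the 3x4 projection matrix from 3D reference points and their 2D image coordinates, then checking the reprojection error. The synthetic camera and points stand in for the eight retro-reflecting markers; the paper's full three-step calibration is not reproduced.

```python
# Direct Linear Transformation: solve the 3x4 projection matrix from >=6
# known 3D-2D point correspondences via a homogeneous least-squares system.
import numpy as np

def dlt_projection_matrix(X3d: np.ndarray, x2d: np.ndarray) -> np.ndarray:
    rows = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)   # null-space vector = P up to scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    P_true = np.hstack([np.eye(3), [[0.1], [0.2], [2.0]]])   # toy camera
    X3d = rng.uniform(-1, 1, (8, 3))                          # eight reference points
    Xh = np.hstack([X3d, np.ones((8, 1))])
    proj = (P_true @ Xh.T).T
    x2d = proj[:, :2] / proj[:, 2:3]

    P_est = dlt_projection_matrix(X3d, x2d)
    reproj = (P_est @ Xh.T).T
    reproj = reproj[:, :2] / reproj[:, 2:3]
    print("max reprojection error:", np.abs(reproj - x2d).max())  # ~ 0
```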

3D Face Tracking using Particle Filter based on MLESAC Motion Estimation (MLESAC 움직임 추정 기반의 파티클 필터를 이용한 3D 얼굴 추적)

  • Sung, Ha-Cheon;Byun, Hye-Ran
    • Journal of KIISE: Computing Practices and Letters / v.16 no.8 / pp.883-887 / 2010
  • 3D face tracking is one of the essential techniques in computer vision, with applications such as surveillance, HCI (human-computer interface), and entertainment. However, 3D face tracking demands high computational cost, which is a serious obstacle to applying it on mobile devices with low computing capacity. In this paper, to reduce the computational cost of 3D tracking and extend 3D face tracking to mobile devices, we propose an efficient particle filtering method using MLESAC (Maximum Likelihood Estimation SAmple Consensus) motion estimation. Finally, its speed and performance are evaluated experimentally.
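
A minimal particle-filter sketch of the tracking loop, with the MLESAC motion estimate represented by a fixed motion value feeding the prediction step and a 1-D state for brevity; the paper's actual 3D face pose model and MLESAC estimator are not implemented here.

```python
# One predict / weight / resample cycle of a particle filter.
import numpy as np

def particle_filter_step(particles, weights, motion, observation, obs_noise=0.5, rng=None):
    rng = rng or np.random.default_rng()
    # Predict: move particles by the estimated motion plus diffusion noise.
    particles = particles + motion + rng.normal(0, 0.2, size=particles.shape)
    # Update: weight each particle by the likelihood of the observation.
    weights = np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights /= weights.sum()
    # Resample: draw a new particle set proportional to the weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 1.0, 200)     # 1-D pose parameter for brevity
    weights = np.full(200, 1.0 / 200)
    for t in range(1, 6):                     # true state advances by 1 per step
        particles, weights = particle_filter_step(
            particles, weights, motion=1.0, observation=float(t), rng=rng)
    print("estimated state:", particles.mean())   # ~ 5
```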

Optimal Display-Control Gain of the Foot-Controlled Isotonic Mouse on a Target Acquisition Task (목표점 선택작업에서 등력성 발 마우스의 최적 반응 - 조종 이득)

  • Lee, Kyung-Tae;Jang, Phil-Sik;Lee, Dong-Hyun
    • IE interfaces / v.17 no.1 / pp.113-120 / 2004
  • The increased use of computers has introduced a wide variety of human-computer interfaces. The mouse is one of the most useful interface tools for placing the cursor at a desired position on the monitor. This paper suggests a foot-controlled isotonic mouse, similar to an ordinary hand-controlled mouse except that positioning is controlled by the right foot and clicking is performed by the left foot. Experimental results showed that both the index of difficulty (IOD) and the display-control gain (DC gain) affected total movement time in a target acquisition task on the monitor. The authors also derived the optimal display-control gain of the foot-controlled isotonic mouse over an index of difficulty of 1.0 to 3.0. This optimal display-control gain, i.e., 0.256, can be used when designing a foot-controlled isotonic mouse.
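
A small worked sketch of the two quantities in the abstract: a Fitts-style index of difficulty, ID = log2(2A/W), and the mapping of foot displacement to cursor displacement through a display-control gain. Whether the paper used this exact ID formulation, and the direction of the gain mapping, are assumptions for illustration.

```python
# Index of difficulty of a target and the effect of a display-control gain.
import math

def index_of_difficulty(amplitude_mm: float, width_mm: float) -> float:
    """ID = log2(2A / W) for target distance A and target width W."""
    return math.log2(2.0 * amplitude_mm / width_mm)

def cursor_displacement(foot_mm: float, dc_gain: float = 0.256) -> float:
    """Cursor movement on screen for a given foot displacement and DC gain (assumed direction)."""
    return dc_gain * foot_mm

if __name__ == "__main__":
    print(f"ID = {index_of_difficulty(amplitude_mm=160, width_mm=40):.2f} bits")   # 3.00 bits
    print(f"cursor moves {cursor_displacement(100):.1f} mm per 100 mm of foot travel")
```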

Robot User Control System using Hand Gesture Recognizer (수신호 인식기를 이용한 로봇 사용자 제어 시스템)

  • Shon, Su-Won;Beh, Joung-Hoon;Yang, Cheol-Jong;Wang, Han;Ko, Han-Seok
    • Journal of Institute of Control, Robotics and Systems / v.17 no.4 / pp.368-374 / 2011
  • This paper proposes a human interface for robot control using a hidden Markov model (HMM) based hand signal recognizer. The command-receiving humanoid robot sends webcam images to a client computer, which extracts hand motion descriptors for the commanding human. Upon feature acquisition, the hand signal recognizer carries out the recognition procedure, and the result is sent back to the robot for responsive actions. System performance is evaluated by measuring recognition on a '48 hand signal set' created randomly from a fundamental hand motion set. For isolated motion recognition, the '48 hand signal set' shows a 97.07% recognition rate while the 'baseline hand signal set' shows 92.4%, validating that the proposed hand signal recognizer is indeed highly discriminative. For connected motions on the '48 hand signal set', it shows a 97.37% recognition rate. These experiments demonstrate that the proposed system is promising for real-world human-robot interface applications.
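
A hedged sketch of isolated gesture recognition with HMMs: a quantized motion sequence is scored under one discrete HMM per gesture via the forward algorithm, and the best-scoring gesture wins. The two toy models, the three-symbol alphabet, and the gesture names are invented for illustration; they are not the paper's 48-signal models.

```python
# Discrete-HMM gesture scoring with the (scaled) forward algorithm.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """Log P(obs | HMM) for initial probs pi, transitions A, emissions B."""
    alpha = pi * B[:, obs[0]]
    log_prob = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        log_prob += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_prob

# Two 2-state HMMs over a 3-symbol alphabet {0: left, 1: right, 2: still}.
models = {
    "wave":  (np.array([0.5, 0.5]),
              np.array([[0.6, 0.4], [0.4, 0.6]]),
              np.array([[0.7, 0.2, 0.1], [0.2, 0.7, 0.1]])),
    "point": (np.array([0.9, 0.1]),
              np.array([[0.8, 0.2], [0.1, 0.9]]),
              np.array([[0.1, 0.8, 0.1], [0.1, 0.1, 0.8]])),
}

obs = [0, 1, 0, 1, 0, 1]  # alternating left/right motion, i.e. a "wave"
scores = {name: forward_log_likelihood(obs, *params) for name, params in models.items()}
print(max(scores, key=scores.get), scores)
```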