• Title/Summary/Keyword: Human Machine Interface

Discriminant Analysis of Human's Implicit Intent based on Eyeball Movement (안구운동 기반의 사용자 묵시적 의도 판별 분석 모델)

  • Jang, Young-Min;Mallipeddi, Rammohan;Kim, Cheol-Su;Lee, Minho
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.6 / pp.212-220 / 2013
  • Recently, there has been a tremendous increase in human-computer/machine interaction systems, whose goal is to provide an appropriate service to the user at the right time with minimal human input, as part of a human augmented cognition system. To develop an efficient augmented cognition system based on human-computer/machine interaction, it is important to interpret the user's implicit intention, which is vague, in addition to the explicit intention. According to cognitive visual-motor theory, human eye movements and pupillary responses are rich sources of information about human intention and behavior. In this paper, we propose a novel approach to identifying implicit visual search intention based on eye-movement patterns and pupillary analysis, including pupil size, the gradient of pupil-size variation, and fixation length/count for the area of interest. The proposed model classifies implicit intention into three types: navigational intent generation, informational intent generation, and informational intent disappearance. Navigational intent refers to searching for anything of interest in an input scene with no specific instructions, while informational intent refers to searching for a particular target object at a specific location in the scene. Based on the eye-movement patterns and pupillary analysis, we used a hierarchical support vector machine that can detect the transitions between the different implicit intents: from navigational intent generation to informational intent generation, and informational intent disappearance.
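The hierarchical decision described in this abstract can be sketched as follows. The feature set (pupil size, gradient of pupil-size variation, fixation length/count) follows the abstract, but the two-stage threshold rules are illustrative stand-ins for the trained SVMs, and every number below is a hypothetical assumption rather than a value from the paper.

```python
def extract_features(pupil_sizes, fixations):
    """Summarise a gaze segment: mean pupil size, mean pupil-size gradient,
    and fixation count / total fixation length for the area of interest."""
    grad = [b - a for a, b in zip(pupil_sizes, pupil_sizes[1:])]
    return {
        "pupil_mean": sum(pupil_sizes) / len(pupil_sizes),
        "pupil_grad": sum(grad) / len(grad) if grad else 0.0,
        "fix_count": len(fixations),
        "fix_length": sum(fixations),
    }

def classify_intent(f, pupil_thresh=3.5, grad_thresh=0.0):
    """Two-stage (hierarchical) decision; each stage mimics one binary SVM."""
    # Stage 1: few, short fixations suggest free navigational search.
    if f["fix_count"] < 3 and f["fix_length"] < 0.9:
        return "navigational"
    # Stage 2: dilated pupils with a rising pupil-size gradient mark
    # informational intent generation; a falling gradient marks disappearance.
    if f["pupil_mean"] > pupil_thresh and f["pupil_grad"] > grad_thresh:
        return "informational-generation"
    return "informational-disappearance"
```

In use, a real system would slide a window over the gaze stream and feed each window's features to trained classifiers instead of these fixed thresholds.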

Speech Emotion Recognition on a Simulated Intelligent Robot (모의 지능로봇에서의 음성 감정인식)

  • Jang Kwang-Dong;Kim Nam;Kwon Oh-Wook
    • MALSORI / no.56 / pp.173-183 / 2005
  • We propose a speech emotion recognition method for an affective human-robot interface. In the proposed method, emotion is classified into six classes: angry, bored, happy, neutral, sad, and surprised. Features for an input utterance are extracted from statistics of phonetic and prosodic information. Phonetic information includes log energy, shimmer, formant frequencies, and Teager energy; prosodic information includes pitch, jitter, duration, and rate of speech. Finally, a pattern classifier based on Gaussian support vector machines decides the emotion class of the utterance. We recorded speech commands and dialogs uttered 2 m away from microphones in five different directions. Experimental results show that the proposed method yields 48% classification accuracy, while human classifiers achieve 71% accuracy.
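Two of the voice-quality statistics named in this abstract, jitter and shimmer, can be computed from per-cycle pitch periods and peak amplitudes as below. This is a generic textbook formulation (local, mean-normalised), not necessarily the paper's exact implementation.

```python
def jitter(periods):
    """Local jitter: mean absolute difference between consecutive pitch
    periods, normalised by the mean period."""
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def shimmer(amplitudes):
    """Local shimmer: the same statistic over per-cycle peak amplitudes."""
    diffs = [abs(b - a) for a, b in zip(amplitudes, amplitudes[1:])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

def utterance_features(periods, amplitudes):
    """A tiny per-utterance feature vector of the kind fed to the SVM
    classifier; a real system would add energy, formants, duration, etc."""
    return {
        "jitter": jitter(periods),
        "shimmer": shimmer(amplitudes),
        "mean_pitch_hz": 1.0 / (sum(periods) / len(periods)),
    }
```

For a perfectly periodic voice both measures are zero; rougher, more irregular phonation pushes them up, which is why they carry emotional information.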

The Mold Close and Open Control of Injection Molding Machine Using Fuzzy Algorithm

  • Park, Jin-Hyun;Lee, Young-Kwan;Kim, Hun-Mo
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2005.06a / pp.575-579 / 2005
  • In this paper, the development of an IMM (Injection Molding Machine) controller is discussed. Presently, the mold close and open control of a toggle-type IMM is open-loop. In this work, a PC-based control system was built to replace the existing controller, and closed-loop control replaced the previous control method using a PC-based PLC. To control the nonlinear toggle-type clamping unit, a fuzzy PI control algorithm was selected and programmed in IL (Instruction List) and LD (Ladder Diagram) on the PC-based PLC. The fuzzy algorithm was also chosen so that the controller can accommodate a change of control object, such as a mold replacement or an additional apparatus. For the development of the IMM controller, a PCI-card-type PC-based PLC, distributed I/O modules with CANopen, an industrial PC, and HMI (Human Machine Interface) software were used.
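A minimal Sugeno-style fuzzy PI step in the spirit of the algorithm described can be sketched as below. The membership ranges, rule table, and singleton outputs are illustrative assumptions (the actual controller was written in IL/LD on the PLC):

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Fuzzy sets over the normalised error and error-change universes.
SETS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}

# Rule table: (error set, d_error set) -> singleton output increment.
RULES = {
    ("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
    ("Z", "N"): -0.5, ("Z", "Z"):  0.0, ("Z", "P"): 0.5,
    ("P", "N"):  0.0, ("P", "Z"):  0.5, ("P", "P"): 1.0,
}

def fuzzy_pi_step(error, d_error):
    """Weighted-average defuzzification of the fired rules. The result is
    an *increment* added to the previous control output, which is what
    makes the controller PI-like rather than purely proportional."""
    num = den = 0.0
    for (e_set, de_set), out in RULES.items():
        w = min(tri(error, *SETS[e_set]), tri(d_error, *SETS[de_set]))
        num += w * out
        den += w
    return num / den if den else 0.0
```

At zero error and zero error-change only the (Z, Z) rule fires, so the output increment is zero and the clamping-unit command holds steady.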

The Development of Data Capturing Modules by Speech-Voice Recognition (음성인식에 의한 측량자료취득 모듈개발)

  • 조규전;이영진;차득기
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.18 no.3 / pp.279-285 / 2000
  • Advances in computer voice-processing technology and in intelligent MMI (Man-Machine Interface) technology now enable us to operate computers by voice, without keyboards or other input devices. In particular, by capturing field data and layout in complicated surveying environments and applying voice recognition to actual survey work, a great deal of working time and cost can be saved. According to the results of this study, real-time geo-coding and graphic data-coding were achieved with a vocabulary of only 25 words by connecting a software engine that recognizes 50,000 words, and voice-recognition hardware based on a dedicated IC that recognizes 60 words, to a total station and RTK-GPS.
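In the spirit of the 25-word command set, recognized tokens can be mapped to feature codes and attached to instrument readings for real-time geo-coding. The vocabulary and codes below are hypothetical examples, not the paper's actual word list.

```python
# Hypothetical subset of a small spoken-command vocabulary mapping
# recognized words to feature codes for geo-coding.
VOCAB = {
    "point": "PT", "line": "LN", "close": "CL",
    "curb": "CRB", "manhole": "MH", "tree": "TREE",
}

def code_observation(spoken_tokens, reading):
    """Attach the feature codes for the recognized words to one
    total-station/RTK-GPS reading (easting, northing, height)."""
    codes = [VOCAB[t] for t in spoken_tokens if t in VOCAB]
    if not codes:
        raise ValueError("no vocabulary word recognized")
    return {"codes": codes, "position": reading}
```

A command like "point manhole" then tags the current reading as a manhole point without the surveyor touching the keypad.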

Vision-Based Finger Action Recognition by Angle Detection and Contour Analysis

  • Lee, Dae-Ho;Lee, Seung-Gwan
    • ETRI Journal / v.33 no.3 / pp.415-422 / 2011
  • In this paper, we present a novel vision-based method of recognizing finger actions for use in electronic appliance interfaces. Human skin is first detected using color and consecutive motion information. Fingertips are then detected by a novel scale-invariant angle detection based on a variable k-cosine, and fingertip tracking is implemented as detected-region tracking. By analyzing the contour of the tracked fingertip, fingertip parameters such as position, thickness, and direction are calculated. Finger actions such as moving, clicking, and pointing are recognized by analyzing these fingertip parameters. Experimental results show that the proposed angle detection correctly detects fingertips and that the recognized actions can be used to interface with electronic appliances.
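The k-cosine at a contour point measures how sharp the contour is there: the cosine of the angle between the vectors to the points k steps before and after. Fingertips are sharp convex spikes, so their cosine is close to 1. A minimal sketch follows; the threshold and the fixed k are simplified assumptions, whereas the paper varies k for scale invariance.

```python
import math

def k_cosine(contour, i, k):
    """Cosine of the angle at contour[i] between the vectors to the points
    k steps before and after; indices wrap around a closed contour."""
    n = len(contour)
    px, py = contour[i]
    ax, ay = contour[(i - k) % n][0] - px, contour[(i - k) % n][1] - py
    bx, by = contour[(i + k) % n][0] - px, contour[(i + k) % n][1] - py
    return (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))

def fingertip_candidates(contour, k=1, cos_thresh=0.3):
    """Indices whose k-cosine exceeds the threshold, i.e. sharp spikes.
    A full detector would also verify convexity and suppress non-maxima."""
    return [i for i in range(len(contour))
            if k_cosine(contour, i, k) > cos_thresh]
```

On a straight contour segment the two vectors point in opposite directions (cosine -1), while a thin finger-like spike scores near +1, which is exactly the separation the detector exploits.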

Human Factors Engineering Program in Nuclear Power Plants (원자력 발전소 인간공학 프로그램)

  • 나정창;이호형
    • Proceedings of the ESK Conference / 1996.10a / pp.125-140 / 1996
  • A Human Factors Engineering (HFE) program should be developed from the early stages of the design process for a nuclear power plant. The HFE program is conducted in accordance with the guidance of the Standard Review Plan (SRP), NUREG-0800, Chapter 18. The major purpose of the program is to reduce the incidence of human error during the operating life of the plant. A comprehensive human factors program was prepared by KOPEC to assure that key elements of human factors involvement are not inadvertently overlooked, and that HFE is included early, completely, and continuously in the design process. This paper introduces the engineering steps of the HF activities that verify that human factors involvement in the man-machine interface is adequate to support safe and efficient operation of a nuclear power plant. If systems are developed without sufficient consideration of HFE in the design, they may exact a high price through plant malfunctions induced by human error.

Imaging a Scene from Experience Given Verbal Expressions

  • Sakai, Y.;Kitazawa, M.;Takahashi, S.
    • Institute of Control, Robotics and Systems: Conference Proceedings / 1995.10a / pp.307-310 / 1995
  • In conventional systems, a human must have knowledge of machines and of their special languages to communicate with them. In one sense this is desirable, but achieving it is laborious and is also a significant cause of human error. To reduce this load, an intelligent man-machine interface should exist between a human operator and the machines being operated. In ordinary human communication, both linguistic and visual information are effective, each compensating for the other's deficiencies. From this viewpoint, this paper discusses the problem of translating verbal expressions into a visual image. The location relation between any two objects in a visual scene is the key to translating verbal information into visual information, as is the case in Fig. 1. The present translation system grows its knowledge with experience. It consists of Japanese-language processing, image processing, and Japanese-to-scene translation functions.
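The core device, resolving a two-object location relation into scene coordinates, can be sketched as follows. The relation vocabulary and the unit grid offsets are hypothetical simplifications of what the system's language-processing stage would produce.

```python
# Hypothetical mapping from spatial relations to unit grid offsets.
OFFSETS = {"on": (0, 1), "under": (0, -1), "left of": (-1, 0), "right of": (1, 0)}

def place(scene, subject, relation, reference):
    """Place `subject` relative to an already-placed `reference` object
    and return the updated scene (a dict of object -> (x, y))."""
    rx, ry = scene[reference]
    dx, dy = OFFSETS[relation]
    scene = dict(scene)  # keep the input scene unmodified
    scene[subject] = (rx + dx, ry + dy)
    return scene
```

Given `{"table": (5, 0)}`, the expression "the cup is on the table" resolves to the cup at `(5, 1)`; chaining such placements builds up the imagined scene object by object.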

Integrated Approach of Multiple Face Detection for Video Surveillance

  • Kim, Tae-Kyun;Lee, Sung-Uk;Lee, Jong-Ha;Kee, Seok-Cheol;Kim, Sang-Ryong
    • Proceedings of the IEEK Conference / 2003.07e / pp.1960-1963 / 2003
  • For applications such as video surveillance and human-computer interfaces, we propose an efficiently integrated method to detect and track faces. Various visual cues are combined in the algorithm: motion, skin color, global appearance, and facial pattern detection. ICA (Independent Component Analysis)-SVM (Support Vector Machine)-based pattern detection is performed on the candidate regions extracted using motion, color, and global appearance information. Simultaneous execution of detection and short-term tracking also increases the detection rate and accuracy. Experimental results show a detection rate of 91% with very few false alarms, running at about 4 frames per second on 640 x 480 pixel images on a 1 GHz Pentium IV.
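The coarse-to-fine cue integration can be sketched generically: cheap cues (motion, skin colour, global appearance) gate which candidate regions reach the expensive pattern verifier (the ICA-SVM stage in the paper). The cue and verifier functions below are stand-in stubs, not the paper's models.

```python
def detect_faces(candidates, cues, verify, min_cues=2):
    """Run each cheap cue over every candidate region; only regions that
    pass at least `min_cues` cues are handed to the costly verifier."""
    detections = []
    for region in candidates:
        score = sum(1 for cue in cues if cue(region))
        if score >= min_cues and verify(region):
            detections.append(region)
    return detections
```

The ordering is the point of the design: the verifier dominates the cost per region, so pruning candidates with cheap cues first is what keeps the pipeline near real time.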

Biosign Recognition based on the Soft Computing Techniques with application to a Rehab -type Robot

  • Lee, Ju-Jang
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2001.10a / pp.29.2-29 / 2001
  • For the design of human-centered systems in which a human and a machine such as a robot form a human-in-the-loop system, a human-friendly interaction/interface is essential. Human-friendly interaction is possible when the system is capable of recognizing human biosigns, such as EMG signals, hand gestures, and facial expressions, so that human intention and/or emotion can be inferred and used as a proper feedback signal. In this talk, we report our experience of applying soft computing techniques, including fuzzy logic, ANNs, GAs, and rough set theory, to efficiently recognize various biosigns and perform effective inference. More specifically, we first observe the characteristics of various forms of biosigns and propose a new way of extracting feature sets for such signals. We then show a standardized procedure for inferring an intention or emotion from the signals. Finally, we present application examples for our rehabilitation robot.
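Typical EMG features that such a recognizer starts from are, for example, mean absolute value and threshold-gated zero crossings. The sketch below is a generic formulation of these two classic features, not the talk's specific pipeline; the downstream fuzzy/ANN/GA/rough-set inference is omitted.

```python
def emg_features(window, zc_eps=0.01):
    """Two classic EMG features over one analysis window:
    MAV (mean absolute value) and ZC (zero-crossing count, gated by a
    small amplitude threshold to suppress noise-driven crossings)."""
    mav = sum(abs(x) for x in window) / len(window)
    zc = sum(
        1 for a, b in zip(window, window[1:])
        if a * b < 0 and abs(a - b) > zc_eps
    )
    return {"mav": mav, "zc": zc}
```

MAV tracks contraction strength while ZC carries coarse frequency content, so together they already separate many gesture classes before any soft-computing inference is applied.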

A Study on the Multi-sensory Usability Evaluation of Haptic Device in Vehicle (차량용 햅틱 디바이스의 다감각 사용성 평가 연구)

  • Kim, Hyeon-Seok;Lee, Sang-Jin;Kim, Byeong-Woo
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.11 / pp.4968-4974 / 2012
  • A haptic device is regarded as a human-machine interface technology for easier, more accurate, and more intuitive operation. The purpose of this paper is to study how to improve the cognitive usability of an existing vehicle haptic device that provided only tactile feedback. In this study, the usability evaluation used multi-sensory feedback, adding auditory feedback to the existing tactile feedback. The emotional factors that drivers associate with the haptic device were extracted by sensibility analysis. The results provide considerations and directions for implementing a haptic device, meaningfully confirm its possibilities, and suggest a design direction that satisfies the driver.