• Title/Summary/Keyword: human-machine interaction

Study on Gesture and Voice-based Interaction in Perspective of a Presentation Support Tool

  • Ha, Sang-Ho; Park, So-Young; Hong, Hye-Soo; Kim, Nam-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.593-599 / 2012
  • Objective: This study aims to implement a non-contact gesture-based interface for presentation purposes and to analyze its effect as an information-transfer support device. Background: Research on control devices using gesture or speech recognition is being actively conducted, driven by rapid technological growth in the UI/UX area and the appearance of smart service products that require new human-machine interfaces. However, relatively few quantitative studies on the practical effects of this new interface type have been conducted, even though system implementation work is very popular. Method: The system presented in this study is implemented with the KINECT® sensor offered by Microsoft Corporation. To investigate whether the proposed system is effective as a presentation support tool, we conducted experiments in which several lectures were given to 40 participants in both a traditional lecture room (keyboard-based presentation control) and a non-contact gesture-based lecture room (KINECT-based presentation control), evaluating their interest and immersion with respect to the lecture contents and lecturing methods, and analyzing their understanding of the lecture contents. Result: Using ANOVA, we examined whether the gesture-based presentation system can play an effective role as a presentation support tool depending on the difficulty of the contents. Conclusion: A non-contact gesture-based interface is a meaningful supportive device when delivering easy and simple information; however, the effect can vary with the contents and the difficulty of the information provided. Application: The results presented in this paper might help in designing new human-machine (computer) interfaces for communication support tools.
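The Result section above describes an ANOVA that crosses presentation interface with content difficulty. The Python sketch below shows that analysis pattern on hypothetical comprehension scores; the column names, data values, and design are illustrative assumptions, not the study's dataset.

```python
# Minimal sketch of the interface x difficulty ANOVA described above.
# The data frame and column names are hypothetical; only the analysis
# pattern (two-way ANOVA with interaction) follows the abstract.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Hypothetical comprehension scores per condition.
df = pd.DataFrame({
    "score":      [78, 82, 75, 90, 66, 71, 85, 88],   # comprehension test result
    "interface":  ["keyboard", "kinect"] * 4,          # presentation control method
    "difficulty": ["easy"] * 4 + ["hard"] * 4,         # lecture content difficulty
})

# Two-way ANOVA: main effects of interface and difficulty plus their interaction.
model = ols("score ~ C(interface) * C(difficulty)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)
```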

Implementation of C-HMI based Real-time Control and Monitoring for Remote Wastewater Reclamation and Reusing System (C-HMI 기반의 원격지 중수도 설비 실시간 제어와 모니터링 구현)

  • Lee, Un-Seon; Park, Man-Gon
    • The Transactions of The Korean Institute of Electrical Engineers / v.62 no.5 / pp.717-722 / 2013
  • Wastewater reclamation and reuse systems have been rising as an alternative to the water resource exhaustion that the whole world is experiencing. To improve existing wastewater reclamation and reuse systems, this research developed a Conversion-Human Machine Interaction (C-HMI) based real-time control and monitoring system consisting of a sensor module, a gateway module, and a web monitoring system. The system communicated almost error-free in various environments and situations. As a result, we achieved our goal: the sensor and gateway modules work correctly with a communication error rate of less than 0.2% throughout the implemented system, and they can be easily controlled and configured as interface equipment for a complex water-quality sensor. This makes it possible to construct a database capable of collecting, storing, analyzing, and assessing reliable water quality and flow rate data.
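As a rough illustration of the kind of gateway-side check implied by the 0.2% figure, the sketch below tallies sensor packets and flags when the communication error rate exceeds that threshold; the packet structure and field names are assumptions for illustration only.

```python
# Illustrative sketch: monitor a gateway's communication error rate against
# the 0.2% target reported above. Packet structure and names are hypothetical.
from dataclasses import dataclass

ERROR_RATE_TARGET = 0.002  # 0.2% as reported in the abstract

@dataclass
class Packet:
    sensor_id: str
    value: float
    crc_ok: bool  # True if the checksum verified at the gateway

def error_rate(packets: list[Packet]) -> float:
    """Fraction of received packets that failed the integrity check."""
    if not packets:
        return 0.0
    failed = sum(1 for p in packets if not p.crc_ok)
    return failed / len(packets)

# Hypothetical usage with a small batch of water-quality readings.
batch = [Packet("ph-01", 7.2, True), Packet("ph-01", 7.1, True),
         Packet("flow-02", 13.4, False)]
rate = error_rate(batch)
print(f"error rate = {rate:.3%}, within target: {rate <= ERROR_RATE_TARGET}")
```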

Empathic MultiMedia and Sensible Interface (감성적 멀티미디어와 감성 인터페이스)

  • 이구형
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 1998.04a / pp.183-188 / 1998
  • The most accurate way to predict the future is to create it. This paper is the result of research aimed at predicting and creating the technologies and products that will be placed before us in the not-so-distant future. To develop technologies and products that can enrich human life in a human-centered and satisfying way, the relationship between machines and humans was reorganized together with the characteristics of human life. In addition, through a close examination of Human-Machine (Computer) Interaction, whose share of human communication life is rapidly increasing, the concept of MultiMedia as an interface enabling human-centered Human-Machine Interaction was derived. An individual's sensibility is a psychological change distinct from emotion; it is weaker in intensity than emotion but has an important influence on the individual's thoughts and behavior in daily life. Sensibility arises intuitively and reflexively in response to external sensory stimuli, and it varies with the individual's life experience and situation. Consumers' desires toward products shift from a simple desire to possess, through desires for comparative advantage and usability, to a desire for sensibility. The technologies and products needed for future human life were accordingly conceived as empathic MultiMedia and a sensibility interface, reflecting human communication life and sensibility characteristics.

A Study on Optimal Layout of Control Buttons on Center Fascia Considering Human Performance under Emergency Situations (돌발 상황 하의 사용자 반응을 고려한 자동차 중앙 계기판 버튼의 최적 배치 방안 연구)

  • Choi, Jun-Young; Kim, Young-Su; Bahn, Sang-Woo; Yun, Myung-Hwan; Lee, Myun-Woo
    • Journal of the Ergonomics Society of Korea / v.29 no.3 / pp.365-373 / 2010
  • Many studies on safety issues in human-machine interaction are being conducted, especially with emergency situations in mind. In this light, objective and reliable measurement of users' reactions under emergency situations is more important than ever for reflecting such issues in the design of everyday things. However, despite the need to consider human-machine interaction and human performance at the design stage, few studies have considered human performance and behavior under emergency situations. This study presents an evaluation method and design guide that incorporate such human performance under emergency situations into human-machine interaction. This is achieved through an experiment in which operators are instructed to press an emergency button at an experimentally designed location under a random emergency situation. By analyzing the results from a human factors perspective, the response time and the accuracy of the operators' behavior are explained. The analysis revealed that, in designing an automobile center fascia, there is a tradeoff between response time and accuracy, and that the optimal button size differs for each part of the center fascia. This method is expected to be applicable in industrial settings for deriving the optimal position of emergency buttons.
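To make the response time versus accuracy tradeoff concrete, the sketch below summarizes hypothetical trial data per button location; the locations, trial values, and summary choices are assumptions, not the study's measurements.

```python
# Minimal sketch: summarize response time and accuracy per button location
# to expose the speed-accuracy tradeoff discussed above. Locations and
# trial data are illustrative assumptions.
from statistics import mean

# (button_location, response_time_seconds, hit) per trial -- hypothetical data.
trials = [
    ("upper_left",  0.92, True), ("upper_left",  1.05, False),
    ("center",      0.78, True), ("center",      0.81, True),
    ("lower_right", 1.20, True), ("lower_right", 1.32, False),
]

locations = sorted({loc for loc, _, _ in trials})
for loc in locations:
    rows = [(rt, hit) for l, rt, hit in trials if l == loc]
    rt_mean = mean(rt for rt, _ in rows)
    accuracy = mean(1.0 if hit else 0.0 for _, hit in rows)
    print(f"{loc:12s} mean RT = {rt_mean:.2f}s  accuracy = {accuracy:.0%}")
```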

Discrimination of Three Emotions using Parameters of Autonomic Nervous System Response

  • Jang, Eun-Hye; Park, Byoung-Jun; Eum, Yeong-Ji; Kim, Sang-Hyeob; Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.30 no.6 / pp.705-713 / 2011
  • Objective: The aim of this study is to compare the results of emotion recognition by several algorithms that classify three emotional states (happiness, neutral, and surprise) using physiological features. Background: Recent emotion recognition studies have tried to detect human emotion from physiological signals; such emotion detection is important for application to human-computer interaction systems. Method: 217 students participated in this experiment. While three kinds of emotional stimuli were presented, ANS responses (EDA, SKT, ECG, RESP, and PPG) were measured as physiological signals in two windows: 60 seconds as the baseline and 60 to 90 seconds during the emotional state. The signals obtained from the baseline and emotional-state sessions were each analyzed over 30 seconds. Participants rated their own feelings toward the emotional stimuli on an emotional assessment scale after the stimuli were presented. Emotion classification was performed with Linear Discriminant Analysis (LDA, SPSS 15.0), Support Vector Machine (SVM), and Multilayer Perceptron (MLP), using difference values obtained by subtracting the baseline from the emotional state. Results: The emotional stimuli had 96% validity and 5.8-point efficiency on average. Statistical analysis showed significant differences in ANS responses among the three emotions. LDA classified the three emotions with an accuracy of 83.4%, while SVM reached 75.5% and MLP 55.6%. Conclusion: This study confirmed that the three emotions can be classified better by LDA using various physiological features than by SVM or MLP. Further work is needed to verify the stability and reliability of this result and to compare it with the accuracy of other classification algorithms. Application: This could improve the chances of recognizing various human emotions from physiological signals and could be applied to human-computer interaction systems that recognize human emotions.
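The LDA/SVM/MLP comparison reported above maps onto a standard scikit-learn workflow. The sketch below shows that pattern with randomly generated stand-in features; the feature matrix is assumed to hold baseline-subtracted physiological features, and the study's actual dataset is of course not reproduced here.

```python
# Sketch of the classifier comparison described above (LDA vs. SVM vs. MLP).
# X stands in for baseline-subtracted physiological features (EDA, SKT, ECG,
# RESP, PPG); random data is used only to make the example runnable.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(217, 5))        # difference features per participant
y = rng.integers(0, 3, size=217)     # 0=happiness, 1=neutral, 2=surprise

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```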

The Proposal of Truck driver's Support System using Purpose Oriented System

  • Oshima, Naoki; Harada, Akira
    • Proceedings of the Korea Society of Design Studies Conference / 2001.10a / pp.27.1-27 / 2001
  • The purpose of this research is to propose a system that provides information support for truck drivers and to verify its validity. First, the present situation and opinions on computerization were investigated by visiting a transport company and interviewing the operation administrator and truck drivers. Consequently, problems of computerization in the present system were identified. Next, the present system was examined: in the current machine system, processing starts when the human "chooses a function", so this system is called a "Function-Oriented System", and three problems were extracted from it. As a solution to those problems, a Purpose-Oriented System was proposed, in which an Agent that perceives the situation and operates functions autonomously is built in so as to attain the user's purpose. A 3D-Scenario-Expression was proposed as a description method for the task process; it consists of a "Machine and Functional-Item axis", a "Time axis", and a "Situation-Item axis". The task execution processes of the Function-Oriented System and the Purpose-Oriented System were then compared using the 3D-Scenario-Expression, assuming a truck business scene. As a result, two things were found: (1) the concept of a Purpose-Oriented System with a built-in Agent is effective as a situation-responsive machine; (2) a solid (three-dimensional) scenario can express the interaction that cannot be seen in the conventional relation between Human and Machine.
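To make the Function-Oriented versus Purpose-Oriented distinction concrete, the sketch below models an agent that maps a perceived situation to the functions needed for the driver's stated purpose, instead of waiting for the driver to choose each function; the situation fields and function names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch contrasting the two system concepts described above:
# a Function-Oriented System waits for the user to pick a function, while a
# Purpose-Oriented System lets an Agent pick functions from the perceived
# situation. Field and function names are illustrative only.
from dataclasses import dataclass

@dataclass
class Situation:
    driving: bool
    near_delivery_point: bool
    fuel_low: bool

def purpose_oriented_agent(purpose: str, situation: Situation) -> list[str]:
    """Select the functions needed to attain the purpose in this situation."""
    functions = []
    if purpose == "complete_delivery":
        if situation.near_delivery_point:
            functions.append("show_unloading_instructions")
        elif situation.driving:
            functions.append("navigate_to_delivery_point")
        if situation.fuel_low:
            functions.append("suggest_refueling_stop")
    return functions

# The driver states a purpose once; the agent works out the functions.
now = Situation(driving=True, near_delivery_point=False, fuel_low=True)
print(purpose_oriented_agent("complete_delivery", now))
# -> ['navigate_to_delivery_point', 'suggest_refueling_stop']
```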

Multimodal Emotion Recognition using Face Image and Speech (얼굴영상과 음성을 이용한 멀티모달 감정인식)

  • Lee, Hyeon Gu; Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.1 / pp.29-40 / 2012
  • A challenging research issue of growing importance to those working in human-computer interaction is to endow a machine with emotional intelligence. Emotion recognition technology therefore plays an important role in human-computer interaction research, as it allows more natural, human-like communication between human and computer. In this paper, we propose a multimodal emotion recognition system using face and speech to improve recognition performance. For face-based emotion recognition, a distance measurement is calculated by 2D-PCA of the MCS-LBP image and a nearest neighbor classifier; for speech-based emotion recognition, a likelihood measurement is obtained by a Gaussian mixture model based on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined using a weighted summation, and the fused score is used to classify the human emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared to the uni-modal approaches. From these results, we confirmed that the proposed approach achieves a significant performance improvement and is very effective.
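The weighted-summation fusion step lends itself to a short sketch: normalize the face distance (lower is better) and the speech log-likelihood (higher is better) onto a common scale, then combine them per emotion class. The normalization choice, weight value, and example scores below are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch of weighted-sum score fusion over face and speech scores, per
# emotion class. Normalization and the weight w are illustrative choices.
import numpy as np

def min_max(x: np.ndarray) -> np.ndarray:
    """Scale scores to [0, 1]; guard against a constant vector."""
    span = x.max() - x.min()
    return np.zeros_like(x) if span == 0 else (x - x.min()) / span

def fuse(face_distances: np.ndarray, speech_loglik: np.ndarray, w: float = 0.6) -> int:
    """Return the index of the emotion class with the best fused score."""
    face_sim = 1.0 - min_max(face_distances)   # smaller distance -> higher similarity
    speech_sim = min_max(speech_loglik)        # higher likelihood -> higher similarity
    fused = w * face_sim + (1.0 - w) * speech_sim
    return int(np.argmax(fused))

# Hypothetical scores for classes [anger, happiness, neutral, sadness].
face_d = np.array([12.0, 4.5, 9.8, 11.2])               # nearest-neighbor distances
speech_l = np.array([-310.0, -280.0, -295.0, -320.0])   # GMM log-likelihoods
print(fuse(face_d, speech_l))                           # -> 1 (happiness) here
```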

A Study on Infra-Technology of RCP Interaction System

  • Kim, Seung-Woo; Choe, Jae-Il
    • 제어로봇시스템학회:학술대회논문집 / 2004.08a / pp.1121-1125 / 2004
  • RT (Robot Technology) has been developed as a next-generation future technology. According to a 2002 technical report from the Mitsubishi R&D center, the IT (Information Technology) and RT fusion market will grow to five times the size of the current IT market by the year 2015. Moreover, a recent IEEE report predicts that most people will have a robot within the next ten years. The RCP (Robotic Cellular Phone), a CP (Cellular Phone) offering personal robot services, will be an intermediate high-tech personal machine between the one-CP-per-person and one-robot-per-person generations. The RCP infrastructure consists of RCP-Mobility, RCP-Interaction, and RCP-Integration technologies. For RCP-Mobility, human-friendly motion automation and personal services with walking and arm manipulation are developed. RCP-Interaction is achieved by modeling an emotion-generating engine, and RCP-Integration, which recognizes environmental and self conditions, is also developed. By joining intelligent algorithms and the CP communication network with these three base modules, an RCP system is constructed. This paper focuses in particular on the RCP interaction system. RCP-Interaction (Robotic Cellular Phone for Interaction) is to be developed as an emotional-model CP, as shown in Figure 1. It refers to the CP's sensitivity expression and communication-linking technology, that is, interface technology between human and CP through various emotional models. The interactive emotion functions are designed through differing patterns of vibrator beat frequencies and a feeling system created by smell-injection switching control. Just as music influences a person, a variety of emotions can be felt from the vibrator's beats by converting musical chord frequencies into vibrator beat frequencies. This paper presents the definition, basic theory, and experimental results of the RCP interaction system, and the experimental results confirm its good performance.
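The conversion of musical chord frequencies into vibrator beat frequencies is not specified in the abstract, so the sketch below only illustrates one plausible mapping: compressing audio-range pitches logarithmically into a low-frequency band that a phone vibrator could render. The band limits and example chord are assumptions, not the paper's actual conversion.

```python
# Illustrative sketch: map musical chord frequencies (audio range) onto
# vibrator beat frequencies (a low tactile band). The mapping, band limits,
# and example chord are assumptions.
import math

AUDIO_LOW, AUDIO_HIGH = 27.5, 4186.0   # piano range A0..C8 in Hz
VIBE_LOW, VIBE_HIGH = 1.0, 10.0        # assumed beat-frequency band in Hz

def to_beat_frequency(note_hz: float) -> float:
    """Logarithmically compress an audio pitch into the vibrator beat band."""
    t = (math.log(note_hz) - math.log(AUDIO_LOW)) / (math.log(AUDIO_HIGH) - math.log(AUDIO_LOW))
    return VIBE_LOW + t * (VIBE_HIGH - VIBE_LOW)

# C major chord (C4, E4, G4) rendered as three beat frequencies.
for hz in (261.63, 329.63, 392.00):
    print(f"{hz:7.2f} Hz -> {to_beat_frequency(hz):.2f} Hz beat")
```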

Haptic Modeler using Haptic User Interface (촉감 사용자 인터페이스를 이용한 촉감 모델러)

  • Cha, Jong-Eun; Oakley, Ian; Kim, Yeong-Mi; Kim, Jong-Phil; Lee, Beom-Chan; Seo, Yong-Won; Ryu, Je-Ha
    • 한국HCI학회:학술대회논문집 / 2006.02a / pp.1031-1036 / 2006
  • The haptics field is widely studied in medicine, education, the military, and broadcasting because it lets users touch displayed content by providing tactile feedback. In the medical field, products such as Reachin's laparoscopic surgery training software, which lets trainees practice surgical procedures while feeling the same forces as in real surgery, are already commercialized. However, although haptics offers more realistic and natural interaction by adding tactile sensation to audiovisual information, it is still unfamiliar to general users. One reason is the lack of content that supports haptic interaction. Haptic content generally consists of computer graphics models, so the content itself is created with an ordinary graphics modeler, but the haptic information must then be added to the file by hand or programmed separately for each application. Because graphic modeling and haptic modeling do not proceed at the same time, creating haptic content takes a long time and adding haptic information is not intuitive. In graphic modeling, content can be manipulated by hand while looking at it; in haptic modeling, it must be manipulated while simultaneously feeling the tactile sensation, which requires a dedicated interface. This paper describes a haptic modeler that allows haptic-interactive content to be created and manipulated intuitively. In the modeler, the user can create and manipulate 3D content while touching it in real time with a 3-DOF haptic device, and can intuitively edit the surface haptic properties of the content through a haptic user interface. Unlike a conventional 2D graphical user interface operated with a mouse, the haptic user interface is three-dimensional and consists of buttons, radio buttons, sliders, and a joystick operated with the haptic device. The user changes the surface haptic property values of the content by operating these components, and can set the values intuitively by touching a part of the haptic user interface and feeling the resulting sensation in real time. In addition, an XML-based file format is provided, so the created content can be saved, loaded, or added to other content.
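The abstract mentions an XML-based file format for saving haptic content but does not show its schema, so the sketch below is only a hypothetical illustration of how surface haptic properties might be attached to graphic objects in XML; the element and attribute names are invented.

```python
# Hypothetical sketch of an XML format attaching surface haptic properties
# (stiffness, damping, static/dynamic friction) to graphic objects. The
# element and attribute names are invented; the paper's actual schema is
# not reproduced here.
import xml.etree.ElementTree as ET

scene = ET.Element("hapticScene")
obj = ET.SubElement(scene, "object", name="cube01", mesh="cube01.obj")
ET.SubElement(obj, "surface",
              stiffness="0.8", damping="0.1",
              staticFriction="0.4", dynamicFriction="0.3")

# Serialize, then parse it back and read one property.
xml_text = ET.tostring(scene, encoding="unicode")
print(xml_text)
parsed = ET.fromstring(xml_text)
print(parsed.find("./object/surface").get("stiffness"))  # -> 0.8
```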

A Comparative Study on Collision Detection Algorithms based on Joint Torque Sensor using Machine Learning (기계학습을 이용한 Joint Torque Sensor 기반의 충돌 감지 알고리즘 비교 연구)

  • Jo, Seonghyeon; Kwon, Wookyong
    • The Journal of Korea Robotics Society / v.15 no.2 / pp.169-176 / 2020
  • This paper studied the collision detection of robot manipulators for safe collaboration in human-robot interaction. Based on sensor-based collision detection, external torque is detached from subtracting robot dynamics. To detect collision using joint torque sensor data, a comparative study was conducted using data-based machine learning algorithm. Data was collected from the actual 3 degree-of-freedom (DOF) robot manipulator, and the data was labeled by threshold and handwork. Using support vector machine (SVM), decision tree and k-nearest neighbors KNN method, we derive the optimal parameters of each algorithm and compare the collision classification performance. The simulation results are analyzed for each method, and we confirmed that by an optimal collision status detection model with high prediction accuracy.