• Title/Summary/Keyword: Robot Interaction


Development of Bio-sensor-Based Feature Extraction and Emotion Recognition Model (바이오센서 기반 특징 추출 기법 및 감정 인식 모델 개발)

  • Cho, Ye Ri;Pae, Dong Sung;Lee, Yun Kyu;Ahn, Woo Jin;Lim, Myo Taeg;Kang, Tae Koo
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.67 no.11
    • /
    • pp.1496-1505
    • /
    • 2018
  • Emotion recognition technology is necessary for human-computer interaction. In many situations, communication fails when the other party's emotion is not taken into account, so emotion recognition is an essential element of communication technology and is widely used across many fields. Various bio-sensors can be used to measure human emotion. This paper proposes a system for recognizing human emotions using two physiological sensors. For emotion classification, Russell's two-dimensional emotion model was used, and a classification method based on personality was proposed by extracting sensor-specific features. The emotion model was divided into four emotions using a Support Vector Machine (SVM) classifier. Finally, the proposed emotion recognition system was evaluated through a practical experiment.
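The abstract above maps bio-sensor features onto Russell's two-dimensional (valence-arousal) circumplex and splits it into four emotions with an SVM. As a minimal sketch of the quadrant idea only, the SVM decision boundaries can be replaced by the raw quadrant rule; the quadrant labels below are illustrative assumptions, not the paper's class definitions:

```python
def russell_quadrant(valence, arousal):
    """Map a (valence, arousal) point to one of four quadrant emotions.

    The labels are hypothetical stand-ins for the paper's four classes;
    a trained SVM would place curved boundaries instead of the axes.
    """
    if valence >= 0:
        return "excited" if arousal >= 0 else "relaxed"
    return "angry" if arousal >= 0 else "sad"
```

In practice the sensor features would first be projected into valence-arousal coordinates before this lookup is meaningful.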

An integrate information technology model during earthquake dynamics

  • Chen, Chen-Yuan;Chen, Ying-Hsiu;Yu, Shang-En;Chen, Yi-Wen;Li, Chien-Chung
    • Structural Engineering and Mechanics
    • /
    • v.44 no.5
    • /
    • pp.633-647
    • /
    • 2012
  • Applying Information Technology (IT) in practical engineering has become one of the most important issues of the past few decades, in areas such as internal solitary waves, intelligent robot interaction, artificial intelligence, fuzzy Lyapunov methods, tension leg platforms (TLP), and consumer and service quality. Beyond changing traditional teaching modes and increasing interaction with users, IT also connects with society at large by collecting the latest information from the internet. Learning to use IT facilities is therefore becoming one of an engineer's essential skills. In addition to studying how well engineers learn to operate IT facilities and apply them in teaching, this research also discusses how engineers' general information literacy affects the results of learning IT. It introduces the combined TAM and TPB model to understand how engineers use IT facilities.

Robust Real-time Tracking of Facial Features with Application to Emotion Recognition (안정적인 실시간 얼굴 특징점 추적과 감정인식 응용)

  • Ahn, Byungtae;Kim, Eung-Hee;Sohn, Jin-Hun;Kweon, In So
    • The Journal of Korea Robotics Society
    • /
    • v.8 no.4
    • /
    • pp.266-272
    • /
    • 2013
  • Facial feature extraction and tracking are essential steps in human-robot interaction (HRI) applications such as face recognition, gaze estimation, and emotion recognition. The active shape model (ASM) is one of the successful generative models for extracting facial features. However, ASM alone is not adequate for modeling a face in practical applications, because the positions of facial features are extracted unstably due to the limited number of iterations in the ASM fitting algorithm, and the inaccurate feature positions degrade emotion recognition performance. In this paper, we propose a real-time facial feature extraction and tracking framework for emotion recognition that combines ASM with Lucas-Kanade (LK) optical flow, which is well suited to estimating time-varying geometric parameters in sequential face images. In addition, we introduce a straightforward method to avoid tracking failure caused by partial occlusions, a serious problem for tracking-based algorithms. Emotion recognition experiments with k-NN and SVM classifiers show over 95% classification accuracy for three emotions: "joy", "anger", and "disgust".
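The framework above tracks ASM feature points frame-to-frame with Lucas-Kanade optical flow. A minimal single-window LK step in NumPy is sketched below, assuming a small grayscale patch and sub-pixel motion; this is not the paper's implementation, which would typically use a pyramidal tracker:

```python
import numpy as np

def lk_step(prev_patch, next_patch):
    """One Lucas-Kanade step: estimate the (dx, dy) translation that
    maps prev_patch onto next_patch, assuming small motion."""
    Iy, Ix = np.gradient(prev_patch.astype(float))   # spatial gradients
    It = next_patch.astype(float) - prev_patch.astype(float)
    # Normal equations of  min_d  sum (It + [Ix Iy] . d)^2
    G = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = np.array([np.sum(Ix * It), np.sum(Iy * It)])
    dx, dy = np.linalg.solve(G, -b)
    return dx, dy
```

The window must contain 2D texture (a non-singular gradient matrix G); flat or purely linear patches make the system unsolvable, which is one reason trackers reseed points after occlusion.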

Measurement on range of two degrees of freedom motion for analytic generation of workspace (작업영역의 해석적 생성을 위한 2자유도 동작의 동작범위 측정)

  • Kee, Dohyung
    • Journal of the Ergonomics Society of Korea
    • /
    • v.15 no.2
    • /
    • pp.15-24
    • /
    • 1996
  • To generate workspace analytically using robot kinematics, data on the range of human joint motion, especially the range of two-degree-of-freedom motion, are needed; however, such data had not previously been reported. In this research, we therefore investigated the interaction effect of two-degree-of-freedom motions occurring simultaneously at the shoulder, virtual hip (L5/S1), and hip joints for 47 young male students. ANOVA results showed that when a two-degree-of-freedom motion occurs at a joint, the action of one degree of freedom can either decrease or increase the effective range of the other. Specifically, shoulder flexion decreased as the shoulder was adducted, or abducted from 60° to the maximum degree of abduction, while shoulder flexion increased as the joint was abducted from the neutral position to 60°. Flexion at the virtual hip joint decreased as the trunk was bent laterally, and hip flexion likewise decreased as the hip was adducted or abducted from the neutral position. Based on the two-degree-of-freedom joint range data measured in this study, workspace is expected to be generated more precisely.
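The measured joint ranges feed a kinematic model that sweeps the joints to generate the workspace. A toy planar two-link sweep is sketched below; the link lengths, step size, and independent joint ranges are hypothetical simplifications (the paper's point is precisely that the range of one degree of freedom depends on the other):

```python
import math

def planar_workspace(l1, l2, range1, range2, step_deg=5.0):
    """Sweep two joint angles (degrees) over their measured ranges and
    collect reachable endpoint positions of a planar two-link chain."""
    pts = []
    t1 = range1[0]
    while t1 <= range1[1] + 1e-9:
        t2 = range2[0]
        while t2 <= range2[1] + 1e-9:
            a1, a2 = math.radians(t1), math.radians(t1 + t2)
            pts.append((l1 * math.cos(a1) + l2 * math.cos(a2),
                        l1 * math.sin(a1) + l2 * math.sin(a2)))
            t2 += step_deg
        t1 += step_deg
    return pts
```

To reflect the interaction effect reported above, `range2` would in practice be a function of `t1` rather than a fixed interval.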


Emotion Recognition Based on Human Gesture (인간의 제스쳐에 의한 감정 인식)

  • Song, Min-Kook;Park, Jin-Bae;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.17 no.1
    • /
    • pp.46-51
    • /
    • 2007
  • This paper presents gesture analysis for human-robot interaction. Understanding human emotions through gesture is one of the skills computers need in order to interact intelligently with their human counterparts. Gesture analysis consists of several processes, such as hand detection, feature extraction, and emotion recognition; for the recognition step we used an HMM (Hidden Markov Model). We constructed a large gesture database, with which we verified our method. As a result, our method was successfully integrated and operated in a mobile system.
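HMM-based gesture recognition typically trains one model per gesture and picks the model giving the observation sequence the highest likelihood. A minimal discrete-HMM forward algorithm and classifier are sketched below; the model names and toy parameters are assumptions, not the paper's trained values:

```python
def forward_likelihood(obs, pi, A, B):
    """Likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm. pi: initial state probabilities,
    A: state transition matrix, B: per-state emission probabilities."""
    alpha = [pi[s] * B[s][obs[0]] for s in range(len(pi))]
    for o in obs[1:]:
        alpha = [sum(alpha[q] * A[q][s] for q in range(len(pi))) * B[s][o]
                 for s in range(len(pi))]
    return sum(alpha)

def classify_gesture(obs, models):
    """Pick the gesture model with the highest sequence likelihood."""
    return max(models, key=lambda name: forward_likelihood(obs, *models[name]))
```

For long sequences a log-space or scaled forward pass is used instead, since the raw likelihood underflows.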

Shared Vehicle Teleoperation using a Virtual Driving Interface (가상 운전 인터페이스를 활용한 자동차 협력 원격조종)

  • Kim, Jae-Seok;Lee, Kwang-Hyun;Ryu, Jee-Hwan
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.21 no.3
    • /
    • pp.243-249
    • /
    • 2015
  • In direct vehicle teleoperation, a human operator drives a vehicle at a distance through a pair of master and slave devices. Under time delay, however, it is difficult to drive the vehicle remotely because of the slow response. To address this problem, we introduce a novel methodology of shared vehicle teleoperation using a virtual driving interface. The methodology comprises four components: 1) a virtual driving environment, 2) an interface for the virtual driving environment, 3) a path generator based on the virtual driving trajectory, and 4) a path-following controller. Experimental results showed the effectiveness of the proposed approach in both simple and cluttered driving environments. In the experiments, we compared two sampling methods, fixed sampling time and user-defined instants, and a merged method ultimately showed the best remote driving performance in terms of completion time and number of collisions.
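The path generator above converts a recorded virtual driving trajectory into a reference path for the follower. One common way to regularize such a trajectory (regardless of how its points were sampled in time) is to resample it at fixed arc-length spacing; the sketch below assumes simple 2D waypoints with no duplicate points, and is not the paper's specific generator:

```python
import math

def resample_path(points, spacing):
    """Resample a polyline of (x, y) waypoints at fixed arc-length
    spacing, interpolating linearly inside segments."""
    # cumulative arc length along the recorded trajectory
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    out, target, seg = [points[0]], spacing, 1
    while target <= dists[-1] and seg < len(points):
        if dists[seg] < target:
            seg += 1          # advance to the segment containing target
            continue
        t = (target - dists[seg - 1]) / (dists[seg] - dists[seg - 1])
        (x0, y0), (x1, y1) = points[seg - 1], points[seg]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
        target += spacing
    return out
```

A path-following controller (e.g. pure pursuit) can then consume the evenly spaced waypoints independently of the operator's original driving speed.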

Development of the MVS (Muscle Volume Sensor) for Human-Machine Interface (인간-기계 인터페이스를 위한 근 부피 센서 개발)

  • Lim, Dong Hwan;Lee, Hee Don;Kim, Wan Soo;Han, Jung Soo;Han, Chang Soo;An, Jae Yong
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.30 no.8
    • /
    • pp.870-877
    • /
    • 2013
  • There has been much recent research interest in developing human-machine interfaces of many kinds. The field currently requires more accurate and reliable sensing systems to detect intended human motion. Most conventional human-machine interfaces use electromyography (EMG) sensors to detect the intended motion; however, EMG sensors have a number of disadvantages that make such interfaces difficult to use. This study describes a muscle volume sensor (MVS) developed to measure variation in the outline of a muscle, for use as a human-machine interface. We developed an algorithm to calibrate the system, and the feasibility of using the MVS for detecting muscular activity was demonstrated experimentally. We evaluated the performance of the MVS via isotonic contraction using KIN-COM® equipment at torques of 5, 10, and 15 Nm.
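The abstract mentions a calibration algorithm without describing it, so the sketch below only illustrates one plausible form: a least-squares linear map from raw MVS readings to the known reference torques (5, 10, 15 Nm). Both the linear model and the variable names are assumptions, not the paper's method:

```python
def fit_linear_calibration(raw, torque):
    """Least-squares fit of torque ~ a * raw + b.
    raw: sensor readings, torque: reference torques [Nm].
    The linear form is an assumed calibration model."""
    n = len(raw)
    mx, my = sum(raw) / n, sum(torque) / n
    a = sum((x - mx) * (y - my) for x, y in zip(raw, torque)) / \
        sum((x - mx) ** 2 for x in raw)
    return a, my - a * mx
```

With the fitted `(a, b)`, a live reading maps to an estimated torque as `a * reading + b`.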

A Study on Dynamic Modeling for Underwater Tracked Vehicle (트랙기반 수중건설로봇의 운동 모델링에 관한 연구)

  • Choi, Dong-Ho;Lee, Young-Jin;Hong, Sung-Min;Vu, Mai The;Choi, Hyeung-Sik;Kim, Joon-Young
    • Journal of Ocean Engineering and Technology
    • /
    • v.29 no.5
    • /
    • pp.386-391
    • /
    • 2015
  • The mobility of tracked vehicles is mainly influenced by the interaction between the tracks and soil. When the track of a tracked vehicle rotates, there will be a slip effect between the track and the soil, which creates a track shear force and the vehicle’s driving force. In this paper, the modeling of a working tool such as a trenching cutter and a tracked vehicle that is the lower frame of a track-based operating robot was performed. In addition, a numerical simulation was executed to verify the performance of the design objectives and the motion characteristics of the combined system.
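The slip-dependent track shear force described above is commonly modeled in terramechanics with the Janosi-Hanamoto relation. The sketch below assumes that standard form; the soil parameters shown are generic assumptions, not values from the paper:

```python
import math

def slip_ratio(r, omega, v):
    """Track slip during driving: i = (r*omega - v) / (r*omega).
    r: sprocket radius [m], omega: track angular speed [rad/s],
    v: actual vehicle speed [m/s]."""
    return (r * omega - v) / (r * omega)

def shear_stress(j, c, p, phi_deg, K):
    """Janosi-Hanamoto shear stress:
    tau = (c + p*tan(phi)) * (1 - exp(-j/K)).
    j: shear displacement [m], c: soil cohesion [Pa],
    p: normal pressure [Pa], phi_deg: internal friction angle [deg],
    K: shear deformation modulus [m]."""
    tau_max = c + p * math.tan(math.radians(phi_deg))
    return tau_max * (1.0 - math.exp(-j / K))
```

Integrating the shear stress over the track contact patch, with shear displacement growing along the track according to the slip ratio, yields the driving force used in such vehicle models.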

AUTOMATION AND ROBOT APPLICATION IN AGRICULTURAL PRODUCTIONS AND BIO-INDUSTRIES

  • Sevila, Francis
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 1996.06c
    • /
    • pp.142-159
    • /
    • 1996
  • Engineering of automated tools for the agro-food industries and rural-world activities has to take up two challenges: answering the immediate, important problems of these industries, and imagining the tools their professionals will need in the next century. Creating or modifying automated tools in the next few years will have to take into account parameters that are either technical (environmental protection, health and safety) or social and economic (investment, employment), with strong interaction with disciplines such as ecology, medicine, ergonomics, and psycho-sociology. The partners for such research, tool manufacturers and users, should be involved early in its content in order to find solutions rapidly to the drastic problems they are facing. Over the longer term, during the next 20 years, there will be an important evolution of rural space management and of food processes. This will imply the emergence of new types of activities and know-how, with lines of automated tools to be invented and developed, such as: micro-systems for localized organic tasks; mobile, adaptive, highly autonomous equipment for actions in natural spaces; and devices for perception, decision, and control that automatically reproduce the expert behavior of human operators. The design of such automated tools requires overcoming technological difficulties such as automating the expert decision process and managing complex design.


Real-Time Face Tracking Algorithm Robust to illumination Variations (조명 변화에 강인한 실시간 얼굴 추적 알고리즘)

  • Lee, Yong-Beom;You, Bum-Jae;Lee, Seong-Whan;Kim, Kwang-Bae
    • Proceedings of the KIEE Conference
    • /
    • 2000.07d
    • /
    • pp.3037-3040
    • /
    • 2000
  • Real-time object tracking has emerged as an important component in several application areas, including machine vision, surveillance, human-computer interaction, and image-based control, and various algorithms have been developed over the years. In many cases, however, they have shown limited results in uncontrolled situations such as illumination changes or cluttered backgrounds. In this paper, we present a novel, computationally efficient algorithm for tracking a human face robustly under illumination changes and against cluttered backgrounds. Previous algorithms usually define the color model as a 2D membership function in a color space, without considering illumination changes. Our new algorithm instead constructs a 3D color model by analyzing a large set of images acquired under various illumination conditions. The algorithm is applied to a mobile head-eye robot and tested in various uncontrolled environments; it can track a human face at more than 100 frames per second, excluding image acquisition time.
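A 3D color model of the kind described can be realized as a normalized histogram over a three-channel color space, built from face pixels collected under varied illumination, with bin counts acting as a membership score. The sketch below assumes RGB input and a coarse 8-bin-per-channel quantization; the paper's actual color space and thresholding are not reproduced:

```python
import numpy as np

def build_color_model(pixels, bins=8):
    """Accumulate training pixels (N x 3, channel values 0-255) into a
    normalized 3D histogram over the color space."""
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    return hist / hist.sum()

def membership(model, pixel, bins=8):
    """Score how face-like a pixel's color is under the model."""
    idx = tuple(int(c) * bins // 256 for c in pixel)
    return model[idx]
```

At run time, thresholding the membership image and tracking the largest connected blob gives a simple face tracker; robustness to illumination comes from the variety of the training pixels rather than the lookup itself.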
